From patchwork Thu Jul 27 14:54:57 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Tvrtko Ursulin X-Patchwork-Id: 13330216 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 688B8C04A6A for ; Thu, 27 Jul 2023 14:55:34 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 190F410E5AA; Thu, 27 Jul 2023 14:55:26 +0000 (UTC) Received: from mgamail.intel.com (unknown [192.55.52.88]) by gabe.freedesktop.org (Postfix) with ESMTPS id 98B1410E08C; Thu, 27 Jul 2023 14:55:18 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1690469718; x=1722005718; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=B9qnVY+VRZNAFFFX4OtVO4JHbUPNlQ3pFjeRAUcBYnk=; b=JCimNEIVCTC91X7YT3T8IL9Dp1LQnXBG1eLUYXqff5AH6R1Toauyeq3K wUpaXl7y3ys8/q17Uhx0u3o9kVAknUMHFlg6LVoBs8RcP3IyEWpZXGSh+ sZ7J7rsWO0glzdsYzuDtN24IoJ5RtQNv67KCs7A5ksmV8Su9WlHDm270s vynsMdDwmR8Ulhd3NoA9Uf9QpTYlUJHSCVBwH2DgPiEvO9Jken8YbKri7 Ek5QmpmuKxInOH+vv/oATC+4tbEIxfH/UAQJyjoTKA7GBA/wERbPL7du4 S3Iu2j0Sze2ulZmzbSHcJ4utnxfM1bx+Xxljy6yIuOqJ3uXKcy2HCMC8C g==; X-IronPort-AV: E=McAfee;i="6600,9927,10784"; a="399268379" X-IronPort-AV: E=Sophos;i="6.01,235,1684825200"; d="scan'208";a="399268379" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Jul 2023 07:55:18 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.01,202,1684825200"; d="scan'208";a="870433711" Received: from jlenehan-mobl1.ger.corp.intel.com (HELO 
localhost.localdomain) ([10.213.228.208]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Jul 2023 07:55:18 -0700
From: Tvrtko Ursulin
To: Intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Subject: [RFC 1/8] drm/i915: Skip clflush after GPU writes on Meteorlake
Date: Thu, 27 Jul 2023 15:54:57 +0100
Message-Id: <20230727145504.1919316-2-tvrtko.ursulin@linux.intel.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230727145504.1919316-1-tvrtko.ursulin@linux.intel.com>
References: <20230727145504.1919316-1-tvrtko.ursulin@linux.intel.com>
MIME-Version: 1.0
X-BeenThere: dri-devel@lists.freedesktop.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Direct Rendering Infrastructure - Development
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: ,
Cc: Thomas Hellström , Matt Roper , Matthew Auld , Fei Yang , Tvrtko Ursulin
Errors-To: dri-devel-bounces@lists.freedesktop.org
Sender: "dri-devel"

From: Tvrtko Ursulin

On Meteorlake the CPU cache will not contain stale data after GPU access, since a write-invalidate snooping protocol is used. This means there is no need to flush before potentially transitioning the buffer to a non-coherent domain.

Use the opportunity to document the situation on discrete too.

Signed-off-by: Tvrtko Ursulin
Cc: Matt Roper
Cc: Fei Yang
Cc: Matthew Auld
Cc: Thomas Hellström
Reviewed-by: Matt Roper
Reviewed-by: Fei Yang
---
 drivers/gpu/drm/i915/gem/i915_gem_domain.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
index ffddec1d2a76..57db9c581bf6 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
@@ -24,9 +24,22 @@ static bool gpu_write_needs_clflush(struct drm_i915_gem_object *obj)
 {
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 
+	/*
+	 * Discrete GPUs never dirty the CPU cache.
+	 */
 	if (IS_DGFX(i915))
 		return false;
 
+	/*
+	 * Cache snooping on Meteorlake uses write-invalidate, so GPU writes
+	 * never end up in the CPU cache.
+	 *
+	 * QQQ: Do other snooping platforms behave identically and could we
+	 * therefore write this as "if !HAS_LLC(i915) && HAS_SNOOP(i915)"?
+	 */
+	if (IS_METEORLAKE(i915))
+		return false;
+
 	/*
 	 * For objects created by userspace through GEM_CREATE with pat_index
 	 * set by set_pat extension, i915_gem_object_has_cache_level() will

From patchwork Thu Jul 27 14:54:58 2023
X-Patchwork-Submitter: Tvrtko Ursulin
X-Patchwork-Id: 13330215
From: Tvrtko Ursulin
To: Intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Subject: [RFC 2/8] drm/i915: Split PTE encode between Gen12 and Meteorlake
Date: Thu, 27 Jul 2023 15:54:58 +0100
Message-Id: <20230727145504.1919316-3-tvrtko.ursulin@linux.intel.com>
In-Reply-To: <20230727145504.1919316-1-tvrtko.ursulin@linux.intel.com>
References: <20230727145504.1919316-1-tvrtko.ursulin@linux.intel.com>
List-Id: Direct Rendering Infrastructure - Development
Cc: Tvrtko Ursulin

From: Tvrtko Ursulin

There is no need to run extra instructions which can never trigger on platforms before Meteorlake.
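The pattern used in this patch — selecting the PTE encoder once at address-space creation so the per-page hot path carries no platform checks — can be sketched in isolation. The bit positions, names, and version check below are simplified placeholders for illustration, not i915's real definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative PTE bit positions only -- not the real i915 layout. */
#define PAGE_PRESENT (1ull << 0)
#define PAGE_RW      (1ull << 1)
#define PTE_PAT0     (1ull << 3)
#define PTE_PAT1     (1ull << 4)

typedef uint64_t (*pte_encode_fn)(uint64_t addr, unsigned int pat_index);

/* Pre-Meteorlake encoder: only the PAT bits this platform has. */
uint64_t gen12_encode(uint64_t addr, unsigned int pat_index)
{
	uint64_t pte = addr | PAGE_PRESENT | PAGE_RW;

	if (pat_index & 1)
		pte |= PTE_PAT0;
	return pte;
}

/* Meteorlake encoder: additionally consumes higher PAT index bits. */
uint64_t mtl_encode(uint64_t addr, unsigned int pat_index)
{
	uint64_t pte = gen12_encode(addr, pat_index);

	if (pat_index & 2)
		pte |= PTE_PAT1;
	return pte;
}

/*
 * Pick the encoder once, at creation time. Every subsequent encode
 * call goes straight to the right function with no version checks.
 */
pte_encode_fn select_encoder(int ver, int ver_release)
{
	if (ver > 12 || (ver == 12 && ver_release >= 70))
		return mtl_encode;
	return gen12_encode;
}
```

The same idea is what `ppgtt->vm.pte_encode` expresses in the diff below: the branch on graphics version happens once in `gen8_ppgtt_create()` instead of on every PTE write.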
Signed-off-by: Tvrtko Ursulin
---
 drivers/gpu/drm/i915/gt/gen8_ppgtt.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
index c8568e5d1147..862ac1d2de25 100644
--- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
+++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
@@ -63,6 +63,30 @@ static u64 gen12_pte_encode(dma_addr_t addr,
 {
 	gen8_pte_t pte = addr | GEN8_PAGE_PRESENT | GEN8_PAGE_RW;
 
+	if (unlikely(flags & PTE_READ_ONLY))
+		pte &= ~GEN8_PAGE_RW;
+
+	if (flags & PTE_LM)
+		pte |= GEN12_PPGTT_PTE_LM;
+
+	if (pat_index & BIT(0))
+		pte |= GEN12_PPGTT_PTE_PAT0;
+
+	if (pat_index & BIT(1))
+		pte |= GEN12_PPGTT_PTE_PAT1;
+
+	if (pat_index & BIT(2))
+		pte |= GEN12_PPGTT_PTE_PAT2;
+
+	return pte;
+}
+
+static u64 mtl_pte_encode(dma_addr_t addr,
+			  unsigned int pat_index,
+			  u32 flags)
+{
+	gen8_pte_t pte = addr | GEN8_PAGE_PRESENT | GEN8_PAGE_RW;
+
 	if (unlikely(flags & PTE_READ_ONLY))
 		pte &= ~GEN8_PAGE_RW;
 
@@ -995,6 +1019,8 @@ struct i915_ppgtt *gen8_ppgtt_create(struct intel_gt *gt,
 	 */
 	ppgtt->vm.alloc_scratch_dma = alloc_pt_dma;
 
+	if (GRAPHICS_VER_FULL(gt->i915) >= IP_VER(12, 70))
+		ppgtt->vm.pte_encode = mtl_pte_encode;
-	if (GRAPHICS_VER(gt->i915) >= 12)
+	else if (GRAPHICS_VER(gt->i915) >= 12)
 		ppgtt->vm.pte_encode = gen12_pte_encode;
 	else

From patchwork Thu Jul 27 14:54:59 2023
X-Patchwork-Submitter: Tvrtko Ursulin
X-Patchwork-Id: 13330214
From: Tvrtko Ursulin
To: Intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Subject: [RFC 3/8] drm/i915: Cache PAT index used by the driver
Date: Thu, 27 Jul 2023 15:54:59 +0100
Message-Id: <20230727145504.1919316-4-tvrtko.ursulin@linux.intel.com>
In-Reply-To: <20230727145504.1919316-1-tvrtko.ursulin@linux.intel.com>
References: <20230727145504.1919316-1-tvrtko.ursulin@linux.intel.com>
List-Id: Direct Rendering Infrastructure - Development
Cc: Matt Roper , Fei Yang , Tvrtko Ursulin

From: Tvrtko Ursulin

Eliminate a bunch of runtime calls to i915_gem_get_pat_index() by caching the interesting PAT indices in struct drm_i915_private. They are static per platform, so there is no need to consult a function every time.

Signed-off-by: Tvrtko Ursulin
Cc: Matt Roper
Cc: Fei Yang
---
 drivers/gpu/drm/i915/Makefile                 |  1 +
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |  3 +--
 drivers/gpu/drm/i915/gem/i915_gem_stolen.c    |  7 ++---
 drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c  | 26 ++++++++++++-------
 .../gpu/drm/i915/gem/selftests/huge_pages.c   |  2 +-
 drivers/gpu/drm/i915/gt/gen6_ppgtt.c          |  4 +--
 drivers/gpu/drm/i915/gt/gen8_ppgtt.c          |  4 +--
 drivers/gpu/drm/i915/gt/intel_ggtt.c          |  8 ++----
 drivers/gpu/drm/i915/gt/intel_migrate.c       | 11 +++-----
 drivers/gpu/drm/i915/gt/selftest_migrate.c    |  9 +++----
 drivers/gpu/drm/i915/gt/selftest_reset.c      | 14 +++-------
 drivers/gpu/drm/i915/gt/selftest_tlb.c        |  5 ++--
 drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c      |  8 ++----
 drivers/gpu/drm/i915/i915_cache.c             | 18 +++++++++++++
 drivers/gpu/drm/i915/i915_cache.h             | 13 ++++++++++
 drivers/gpu/drm/i915/i915_driver.c            |  3 +++
 drivers/gpu/drm/i915/i915_drv.h               |  2 ++
 drivers/gpu/drm/i915/i915_gem.c               |  8 ++----
 drivers/gpu/drm/i915/i915_gpu_error.c         |  8 ++----
 drivers/gpu/drm/i915/selftests/i915_gem.c     |  5 +---
 .../gpu/drm/i915/selftests/i915_gem_evict.c   |  4 +--
 drivers/gpu/drm/i915/selftests/i915_gem_gtt.c | 11 +++-----
 .../drm/i915/selftests/intel_memory_region.c  |  4 +--
 .../gpu/drm/i915/selftests/mock_gem_device.c  |  2 ++
 24 files changed, 89 insertions(+), 91 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_cache.c
 create mode 100644 drivers/gpu/drm/i915/i915_cache.h

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index c5fc91cd58e7..905a51a16588 100644
--- 
a/drivers/gpu/drm/i915/Makefile +++ b/drivers/gpu/drm/i915/Makefile @@ -35,6 +35,7 @@ subdir-ccflags-y += -I$(srctree)/$(src) # core driver code i915-y += i915_driver.o \ i915_drm_client.o \ + i915_cache.o \ i915_config.o \ i915_getparam.o \ i915_ioctl.o \ diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index 5a687a3686bd..0a1d40220020 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -1330,8 +1330,7 @@ static void *reloc_iomap(struct i915_vma *batch, ggtt->vm.insert_page(&ggtt->vm, i915_gem_object_get_dma_address(obj, page), offset, - i915_gem_get_pat_index(ggtt->vm.i915, - I915_CACHE_NONE), + eb->i915->pat_uc, 0); } else { offset += page << PAGE_SHIFT; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c index 5b0a5cf9a98a..1c8eb806b7d3 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c @@ -563,11 +563,8 @@ static void dbg_poison(struct i915_ggtt *ggtt, while (size) { void __iomem *s; - ggtt->vm.insert_page(&ggtt->vm, addr, - ggtt->error_capture.start, - i915_gem_get_pat_index(ggtt->vm.i915, - I915_CACHE_NONE), - 0); + ggtt->vm.insert_page(&ggtt->vm, addr, ggtt->error_capture.start, + ggtt->vm.i915->pat_uc, 0); mb(); s = io_mapping_map_wc(&ggtt->iomap, diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c index 7078af2f8f79..6bd6c239f4ac 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c @@ -58,6 +58,16 @@ i915_ttm_cache_level(struct drm_i915_private *i915, struct ttm_resource *res, I915_CACHE_NONE; } +static unsigned int +i915_ttm_cache_pat(struct drm_i915_private *i915, struct ttm_resource *res, + struct ttm_tt *ttm) +{ + return ((HAS_LLC(i915) || HAS_SNOOP(i915)) && + !i915_ttm_gtt_binds_lmem(res) && + ttm->caching == 
ttm_cached) ? i915->pat_wb : + i915->pat_uc; +} + static struct intel_memory_region * i915_ttm_region(struct ttm_device *bdev, int ttm_mem_type) { @@ -196,7 +206,7 @@ static struct dma_fence *i915_ttm_accel_move(struct ttm_buffer_object *bo, struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo); struct i915_request *rq; struct ttm_tt *src_ttm = bo->ttm; - enum i915_cache_level src_level, dst_level; + unsigned int src_pat, dst_pat; int ret; if (!to_gt(i915)->migrate.context || intel_gt_is_wedged(to_gt(i915))) @@ -206,16 +216,15 @@ static struct dma_fence *i915_ttm_accel_move(struct ttm_buffer_object *bo, if (I915_SELFTEST_ONLY(fail_gpu_migration)) clear = true; - dst_level = i915_ttm_cache_level(i915, dst_mem, dst_ttm); + dst_pat = i915_ttm_cache_pat(i915, dst_mem, dst_ttm); if (clear) { if (bo->type == ttm_bo_type_kernel && !I915_SELFTEST_ONLY(fail_gpu_migration)) return ERR_PTR(-EINVAL); intel_engine_pm_get(to_gt(i915)->migrate.context->engine); - ret = intel_context_migrate_clear(to_gt(i915)->migrate.context, deps, - dst_st->sgl, - i915_gem_get_pat_index(i915, dst_level), + ret = intel_context_migrate_clear(to_gt(i915)->migrate.context, + deps, dst_st->sgl, dst_pat, i915_ttm_gtt_binds_lmem(dst_mem), 0, &rq); } else { @@ -225,14 +234,13 @@ static struct dma_fence *i915_ttm_accel_move(struct ttm_buffer_object *bo, if (IS_ERR(src_rsgt)) return ERR_CAST(src_rsgt); - src_level = i915_ttm_cache_level(i915, bo->resource, src_ttm); + src_pat = i915_ttm_cache_pat(i915, bo->resource, src_ttm); intel_engine_pm_get(to_gt(i915)->migrate.context->engine); ret = intel_context_migrate_copy(to_gt(i915)->migrate.context, deps, src_rsgt->table.sgl, - i915_gem_get_pat_index(i915, src_level), + src_pat, i915_ttm_gtt_binds_lmem(bo->resource), - dst_st->sgl, - i915_gem_get_pat_index(i915, dst_level), + dst_st->sgl, dst_pat, i915_ttm_gtt_binds_lmem(dst_mem), &rq); diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c index 
6b9f6cf50bf6..6bddd733d796 100644 --- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c +++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c @@ -354,7 +354,7 @@ fake_huge_pages_object(struct drm_i915_private *i915, u64 size, bool single) obj->write_domain = I915_GEM_DOMAIN_CPU; obj->read_domains = I915_GEM_DOMAIN_CPU; - obj->pat_index = i915_gem_get_pat_index(i915, I915_CACHE_NONE); + obj->pat_index = i915->pat_uc; return obj; } diff --git a/drivers/gpu/drm/i915/gt/gen6_ppgtt.c b/drivers/gpu/drm/i915/gt/gen6_ppgtt.c index c2bdc133c89a..fb69f667652a 100644 --- a/drivers/gpu/drm/i915/gt/gen6_ppgtt.c +++ b/drivers/gpu/drm/i915/gt/gen6_ppgtt.c @@ -226,9 +226,7 @@ static int gen6_ppgtt_init_scratch(struct gen6_ppgtt *ppgtt) return ret; vm->scratch[0]->encode = - vm->pte_encode(px_dma(vm->scratch[0]), - i915_gem_get_pat_index(vm->i915, - I915_CACHE_NONE), + vm->pte_encode(px_dma(vm->scratch[0]), vm->i915->pat_uc, PTE_READ_ONLY); vm->scratch[1] = vm->alloc_pt_dma(vm, I915_GTT_PAGE_SIZE_4K); diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c index 862ac1d2de25..675f71f06e89 100644 --- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c +++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c @@ -874,9 +874,7 @@ static int gen8_init_scratch(struct i915_address_space *vm) pte_flags |= PTE_LM; vm->scratch[0]->encode = - vm->pte_encode(px_dma(vm->scratch[0]), - i915_gem_get_pat_index(vm->i915, - I915_CACHE_NONE), + vm->pte_encode(px_dma(vm->scratch[0]), vm->i915->pat_uc, pte_flags); for (i = 1; i <= vm->top; i++) { diff --git a/drivers/gpu/drm/i915/gt/intel_ggtt.c b/drivers/gpu/drm/i915/gt/intel_ggtt.c index dd0ed941441a..fca61ddca8ad 100644 --- a/drivers/gpu/drm/i915/gt/intel_ggtt.c +++ b/drivers/gpu/drm/i915/gt/intel_ggtt.c @@ -921,9 +921,7 @@ static int ggtt_probe_common(struct i915_ggtt *ggtt, u64 size) pte_flags |= PTE_LM; ggtt->vm.scratch[0]->encode = - ggtt->vm.pte_encode(px_dma(ggtt->vm.scratch[0]), - i915_gem_get_pat_index(i915, - I915_CACHE_NONE), + 
ggtt->vm.pte_encode(px_dma(ggtt->vm.scratch[0]), i915->pat_uc, pte_flags); return 0; @@ -1298,9 +1296,7 @@ bool i915_ggtt_resume_vm(struct i915_address_space *vm) */ vma->resource->bound_flags = 0; vma->ops->bind_vma(vm, NULL, vma->resource, - obj ? obj->pat_index : - i915_gem_get_pat_index(vm->i915, - I915_CACHE_NONE), + obj ? obj->pat_index : vm->i915->pat_uc, was_bound); if (obj) { /* only used during resume => exclusive access */ diff --git a/drivers/gpu/drm/i915/gt/intel_migrate.c b/drivers/gpu/drm/i915/gt/intel_migrate.c index 576e5ef0289b..b7a61b02f64c 100644 --- a/drivers/gpu/drm/i915/gt/intel_migrate.c +++ b/drivers/gpu/drm/i915/gt/intel_migrate.c @@ -45,9 +45,7 @@ static void xehpsdv_toggle_pdes(struct i915_address_space *vm, * Insert a dummy PTE into every PT that will map to LMEM to ensure * we have a correctly setup PDE structure for later use. */ - vm->insert_page(vm, 0, d->offset, - i915_gem_get_pat_index(vm->i915, I915_CACHE_NONE), - PTE_LM); + vm->insert_page(vm, 0, d->offset, vm->i915->pat_uc, PTE_LM); GEM_BUG_ON(!pt->is_compact); d->offset += SZ_2M; } @@ -65,9 +63,7 @@ static void xehpsdv_insert_pte(struct i915_address_space *vm, * alignment is 64K underneath for the pt, and we are careful * not to access the space in the void. */ - vm->insert_page(vm, px_dma(pt), d->offset, - i915_gem_get_pat_index(vm->i915, I915_CACHE_NONE), - PTE_LM); + vm->insert_page(vm, px_dma(pt), d->offset, vm->i915->pat_uc, PTE_LM); d->offset += SZ_64K; } @@ -77,8 +73,7 @@ static void insert_pte(struct i915_address_space *vm, { struct insert_pte_data *d = data; - vm->insert_page(vm, px_dma(pt), d->offset, - i915_gem_get_pat_index(vm->i915, I915_CACHE_NONE), + vm->insert_page(vm, px_dma(pt), d->offset, vm->i915->pat_uc, i915_gem_object_is_lmem(pt->base) ? 
PTE_LM : 0); d->offset += PAGE_SIZE; } diff --git a/drivers/gpu/drm/i915/gt/selftest_migrate.c b/drivers/gpu/drm/i915/gt/selftest_migrate.c index 3def5ca72dec..a67ede65d816 100644 --- a/drivers/gpu/drm/i915/gt/selftest_migrate.c +++ b/drivers/gpu/drm/i915/gt/selftest_migrate.c @@ -904,8 +904,7 @@ static int perf_clear_blt(void *arg) err = __perf_clear_blt(gt->migrate.context, dst->mm.pages->sgl, - i915_gem_get_pat_index(gt->i915, - I915_CACHE_NONE), + gt->i915->pat_uc, i915_gem_object_is_lmem(dst), sizes[i]); @@ -995,12 +994,10 @@ static int perf_copy_blt(void *arg) err = __perf_copy_blt(gt->migrate.context, src->mm.pages->sgl, - i915_gem_get_pat_index(gt->i915, - I915_CACHE_NONE), + gt->i915->pat_uc, i915_gem_object_is_lmem(src), dst->mm.pages->sgl, - i915_gem_get_pat_index(gt->i915, - I915_CACHE_NONE), + gt->i915->pat_uc, i915_gem_object_is_lmem(dst), sz); diff --git a/drivers/gpu/drm/i915/gt/selftest_reset.c b/drivers/gpu/drm/i915/gt/selftest_reset.c index 79aa6ac66ad2..327dc9294e0f 100644 --- a/drivers/gpu/drm/i915/gt/selftest_reset.c +++ b/drivers/gpu/drm/i915/gt/selftest_reset.c @@ -84,11 +84,8 @@ __igt_reset_stolen(struct intel_gt *gt, void __iomem *s; void *in; - ggtt->vm.insert_page(&ggtt->vm, dma, - ggtt->error_capture.start, - i915_gem_get_pat_index(gt->i915, - I915_CACHE_NONE), - 0); + ggtt->vm.insert_page(&ggtt->vm, dma, ggtt->error_capture.start, + gt->i915->pat_uc, 0); mb(); s = io_mapping_map_wc(&ggtt->iomap, @@ -127,11 +124,8 @@ __igt_reset_stolen(struct intel_gt *gt, void *in; u32 x; - ggtt->vm.insert_page(&ggtt->vm, dma, - ggtt->error_capture.start, - i915_gem_get_pat_index(gt->i915, - I915_CACHE_NONE), - 0); + ggtt->vm.insert_page(&ggtt->vm, dma, ggtt->error_capture.start, + gt->i915->pat_uc, 0); mb(); s = io_mapping_map_wc(&ggtt->iomap, diff --git a/drivers/gpu/drm/i915/gt/selftest_tlb.c b/drivers/gpu/drm/i915/gt/selftest_tlb.c index 3bd6b540257b..6049f01be219 100644 --- a/drivers/gpu/drm/i915/gt/selftest_tlb.c +++ 
b/drivers/gpu/drm/i915/gt/selftest_tlb.c @@ -36,8 +36,6 @@ pte_tlbinv(struct intel_context *ce, u64 length, struct rnd_state *prng) { - const unsigned int pat_index = - i915_gem_get_pat_index(ce->vm->i915, I915_CACHE_NONE); struct drm_i915_gem_object *batch; struct drm_mm_node vb_node; struct i915_request *rq; @@ -157,7 +155,8 @@ pte_tlbinv(struct intel_context *ce, /* Flip the PTE between A and B */ if (i915_gem_object_is_lmem(vb->obj)) pte_flags |= PTE_LM; - ce->vm->insert_entries(ce->vm, &vb_res, pat_index, pte_flags); + ce->vm->insert_entries(ce->vm, &vb_res, ce->vm->i915->pat_uc, + pte_flags); /* Flush the PTE update to concurrent HW */ tlbinv(ce->vm, addr & -length, length); diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c index 7aadad5639c3..8b7aa8c5a99d 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c @@ -1053,14 +1053,10 @@ static void uc_fw_bind_ggtt(struct intel_uc_fw *uc_fw) if (ggtt->vm.raw_insert_entries) ggtt->vm.raw_insert_entries(&ggtt->vm, vma_res, - i915_gem_get_pat_index(ggtt->vm.i915, - I915_CACHE_NONE), - pte_flags); + ggtt->vm.i915->pat_uc, pte_flags); else ggtt->vm.insert_entries(&ggtt->vm, vma_res, - i915_gem_get_pat_index(ggtt->vm.i915, - I915_CACHE_NONE), - pte_flags); + ggtt->vm.i915->pat_uc, pte_flags); } static void uc_fw_unbind_ggtt(struct intel_uc_fw *uc_fw) diff --git a/drivers/gpu/drm/i915/i915_cache.c b/drivers/gpu/drm/i915/i915_cache.c new file mode 100644 index 000000000000..06eb5933c719 --- /dev/null +++ b/drivers/gpu/drm/i915/i915_cache.c @@ -0,0 +1,18 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright © 2023 Intel Corporation + */ + +#include "i915_cache.h" +#include "i915_drv.h" + +void i915_cache_init(struct drm_i915_private *i915) +{ + i915->pat_uc = i915_gem_get_pat_index(i915, I915_CACHE_NONE); + drm_info(&i915->drm, "Using PAT index %u for uncached access\n", + i915->pat_uc); + + i915->pat_wb = 
i915_gem_get_pat_index(i915, I915_CACHE_LLC); + drm_info(&i915->drm, "Using PAT index %u for write-back access\n", + i915->pat_wb); +} diff --git a/drivers/gpu/drm/i915/i915_cache.h b/drivers/gpu/drm/i915/i915_cache.h new file mode 100644 index 000000000000..cb68936fb8a2 --- /dev/null +++ b/drivers/gpu/drm/i915/i915_cache.h @@ -0,0 +1,13 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright © 2023 Intel Corporation + */ + +#ifndef __I915_CACHE_H__ +#define __I915_CACHE_H__ + +struct drm_i915_private; + +void i915_cache_init(struct drm_i915_private *i915); + +#endif /* __I915_CACHE_H__ */ diff --git a/drivers/gpu/drm/i915/i915_driver.c b/drivers/gpu/drm/i915/i915_driver.c index 294b022de22b..bb2223cc3470 100644 --- a/drivers/gpu/drm/i915/i915_driver.c +++ b/drivers/gpu/drm/i915/i915_driver.c @@ -80,6 +80,7 @@ #include "soc/intel_dram.h" #include "soc/intel_gmch.h" +#include "i915_cache.h" #include "i915_debugfs.h" #include "i915_driver.h" #include "i915_drm_client.h" @@ -240,6 +241,8 @@ static int i915_driver_early_probe(struct drm_i915_private *dev_priv) i915_memcpy_init_early(dev_priv); intel_runtime_pm_init_early(&dev_priv->runtime_pm); + i915_cache_init(dev_priv); + ret = i915_workqueues_init(dev_priv); if (ret < 0) return ret; diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index 682ef2b5c7d5..f5c591a762df 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -250,6 +250,8 @@ struct drm_i915_private { unsigned int hpll_freq; unsigned int czclk_freq; + unsigned int pat_uc, pat_wb; + /** * wq - Driver workqueue for GEM. 
* diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c index 1f65bb33dd21..896aa48ed089 100644 --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -422,9 +422,7 @@ i915_gem_gtt_pread(struct drm_i915_gem_object *obj, ggtt->vm.insert_page(&ggtt->vm, i915_gem_object_get_dma_address(obj, offset >> PAGE_SHIFT), - node.start, - i915_gem_get_pat_index(i915, - I915_CACHE_NONE), 0); + node.start, i915->pat_uc, 0); } else { page_base += offset & PAGE_MASK; } @@ -603,9 +601,7 @@ i915_gem_gtt_pwrite_fast(struct drm_i915_gem_object *obj, ggtt->vm.insert_page(&ggtt->vm, i915_gem_object_get_dma_address(obj, offset >> PAGE_SHIFT), - node.start, - i915_gem_get_pat_index(i915, - I915_CACHE_NONE), 0); + node.start, i915->pat_uc, 0); wmb(); /* flush modifications to the GGTT (insert_page) */ } else { page_base += offset & PAGE_MASK; diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c index 4008bb09fdb5..31975a79730c 100644 --- a/drivers/gpu/drm/i915/i915_gpu_error.c +++ b/drivers/gpu/drm/i915/i915_gpu_error.c @@ -1124,14 +1124,10 @@ i915_vma_coredump_create(const struct intel_gt *gt, mutex_lock(&ggtt->error_mutex); if (ggtt->vm.raw_insert_page) ggtt->vm.raw_insert_page(&ggtt->vm, dma, slot, - i915_gem_get_pat_index(gt->i915, - I915_CACHE_NONE), - 0); + gt->i915->pat_uc, 0); else ggtt->vm.insert_page(&ggtt->vm, dma, slot, - i915_gem_get_pat_index(gt->i915, - I915_CACHE_NONE), - 0); + gt->i915->pat_uc, 0); mb(); s = io_mapping_map_wc(&ggtt->iomap, slot, PAGE_SIZE); diff --git a/drivers/gpu/drm/i915/selftests/i915_gem.c b/drivers/gpu/drm/i915/selftests/i915_gem.c index 61da4ed9d521..e620f73793a5 100644 --- a/drivers/gpu/drm/i915/selftests/i915_gem.c +++ b/drivers/gpu/drm/i915/selftests/i915_gem.c @@ -57,10 +57,7 @@ static void trash_stolen(struct drm_i915_private *i915) u32 __iomem *s; int x; - ggtt->vm.insert_page(&ggtt->vm, dma, slot, - i915_gem_get_pat_index(i915, - I915_CACHE_NONE), - 0); + 
ggtt->vm.insert_page(&ggtt->vm, dma, slot, i915->pat_uc, 0); s = io_mapping_map_atomic_wc(&ggtt->iomap, slot); for (x = 0; x < PAGE_SIZE / sizeof(u32); x++) { diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_evict.c b/drivers/gpu/drm/i915/selftests/i915_gem_evict.c index f8fe3681c3dc..f910ec9b6d2b 100644 --- a/drivers/gpu/drm/i915/selftests/i915_gem_evict.c +++ b/drivers/gpu/drm/i915/selftests/i915_gem_evict.c @@ -246,7 +246,7 @@ static int igt_evict_for_cache_color(void *arg) struct drm_mm_node target = { .start = I915_GTT_PAGE_SIZE * 2, .size = I915_GTT_PAGE_SIZE, - .color = i915_gem_get_pat_index(gt->i915, I915_CACHE_LLC), + .color = gt->i915->pat_wb, }; struct drm_i915_gem_object *obj; struct i915_vma *vma; @@ -309,7 +309,7 @@ static int igt_evict_for_cache_color(void *arg) /* Attempt to remove the first *pinned* vma, by removing the (empty) * neighbour -- this should fail. */ - target.color = i915_gem_get_pat_index(gt->i915, I915_CACHE_L3_LLC); + target.color = gt->i915->pat_uc; mutex_lock(&ggtt->vm.mutex); err = i915_gem_evict_for_node(&ggtt->vm, NULL, &target, 0); diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c index 5c397a2df70e..c96b7f7d7853 100644 --- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c +++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c @@ -135,7 +135,7 @@ fake_dma_object(struct drm_i915_private *i915, u64 size) obj->write_domain = I915_GEM_DOMAIN_CPU; obj->read_domains = I915_GEM_DOMAIN_CPU; - obj->pat_index = i915_gem_get_pat_index(i915, I915_CACHE_NONE); + obj->pat_index = i915->pat_uc; /* Preallocate the "backing storage" */ if (i915_gem_object_pin_pages_unlocked(obj)) @@ -358,9 +358,7 @@ static int lowlevel_hole(struct i915_address_space *vm, mock_vma_res->start = addr; with_intel_runtime_pm(vm->gt->uncore->rpm, wakeref) - vm->insert_entries(vm, mock_vma_res, - i915_gem_get_pat_index(vm->i915, - I915_CACHE_NONE), + vm->insert_entries(vm, mock_vma_res, vm->i915->pat_uc, 0); } 
count = n; @@ -1379,10 +1377,7 @@ static int igt_ggtt_page(void *arg) ggtt->vm.insert_page(&ggtt->vm, i915_gem_object_get_dma_address(obj, 0), - offset, - i915_gem_get_pat_index(i915, - I915_CACHE_NONE), - 0); + offset, i915->pat_uc, 0); } order = i915_random_order(count, &prng); diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c index d985d9bae2e8..b82fe0ef8cd7 100644 --- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c +++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c @@ -1070,9 +1070,7 @@ static int igt_lmem_write_cpu(void *arg) /* Put the pages into a known state -- from the gpu for added fun */ intel_engine_pm_get(engine); err = intel_context_migrate_clear(engine->gt->migrate.context, NULL, - obj->mm.pages->sgl, - i915_gem_get_pat_index(i915, - I915_CACHE_NONE), + obj->mm.pages->sgl, i915->pat_uc, true, 0xdeadbeaf, &rq); if (rq) { dma_resv_add_fence(obj->base.resv, &rq->fence, diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.c b/drivers/gpu/drm/i915/selftests/mock_gem_device.c index da0b269606c5..1d1a457e2aee 100644 --- a/drivers/gpu/drm/i915/selftests/mock_gem_device.c +++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.c @@ -181,6 +181,8 @@ struct drm_i915_private *mock_gem_device(void) /* Set up device info and initial runtime info. 
*/ intel_device_info_driver_create(i915, pdev->device, &mock_info); + i915_cache_init(i915); + dev_pm_domain_set(&pdev->dev, &pm_domain); pm_runtime_enable(&pdev->dev); pm_runtime_dont_use_autosuspend(&pdev->dev); From patchwork Thu Jul 27 14:55:00 2023 X-Patchwork-Submitter: Tvrtko Ursulin X-Patchwork-Id: 13330218 From: Tvrtko Ursulin To: Intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org Subject: [RFC 4/8] drm/i915: Refactor PAT/object cache handling Date: Thu, 27 Jul 2023 15:55:00 +0100 Message-Id: <20230727145504.1919316-5-tvrtko.ursulin@linux.intel.com> In-Reply-To: <20230727145504.1919316-1-tvrtko.ursulin@linux.intel.com> References: <20230727145504.1919316-1-tvrtko.ursulin@linux.intel.com> Cc: Matt Roper , Chris Wilson , Andi Shyti , Fei Yang , Tvrtko Ursulin Sender: "dri-devel" From: Tvrtko Ursulin Commit 9275277d5324 ("drm/i915: use pat_index instead of cache_level") has introduced PAT indices to i915 internal APIs, partially replacing the usage of driver internal cache_level, but has also added a few sub-optimal design decisions which this patch tries to improve upon. Principal change here is to invert the per platform cache level to PAT index table which was added by the referenced commit, and by doing so enable i915 to understand the cache mode behind each PAT index, changing them from opaque to transparent. Once we have the inverted table we are able to remove the hidden false "return true" from i915_gem_object_has_cache_level and make the involved code path clearer.
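To illustrate the inversion described above — replacing a cache-level-to-PAT table with a PAT-index-to-cache-mode table plus an explicit reverse lookup — here is a standalone C sketch. All names, macros and table values here are hypothetical stand-ins, not the actual i915 definitions; only the mode-plus-flags encoding idea mirrors the patch.

```c
#include <assert.h>
#include <stddef.h>

/*
 * Hypothetical cache descriptor encoding: low byte holds the base
 * caching mode, upper bits hold coherency flags. This mirrors the
 * mode-plus-flags idea behind i915_cache_t; exact values are made up.
 */
enum cache_mode { MODE_UC, MODE_WC, MODE_WT, MODE_WB };
#define FLAG_COH1W (1u << 8)	/* 1-way coherent: GPU snoops CPU cache */
#define FLAG_COH2W (1u << 9)	/* 2-way coherent: CPU also sees GPU writes */
#define CACHE_MODE(c)  ((c) & 0xffu)
#define CACHE_FLAGS(c) ((c) & ~0xffu)

/* Inverted table: PAT index -> cache descriptor (illustrative platform). */
static const unsigned int cache_modes[] = {
	MODE_WB | FLAG_COH1W | FLAG_COH2W,	/* PAT 0 */
	MODE_WC,				/* PAT 1 */
	MODE_WT,				/* PAT 2 */
	MODE_UC,				/* PAT 3 */
};

/*
 * Reverse lookup of a PAT index for a requested cache descriptor.
 * Returns -1 when no entry matches, instead of silently claiming a
 * match the way a hidden "return true" would.
 */
static int find_pat(unsigned int cache)
{
	for (size_t i = 0; i < sizeof(cache_modes) / sizeof(cache_modes[0]); i++)
		if (cache_modes[i] == cache)
			return (int)i;
	return -1;
}
```

With the table direction inverted like this, callers can ask "what mode does this PAT index imply" directly, and unknown mode/flag combinations fail the lookup explicitly rather than being papered over.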
To achieve this we replace the enum i915_cache_level with i915_cache_t, composed of a more detailed representation of each cache mode (base mode plus flags). In this way we are able to express the differences between different write-back mode coherency settings on Meteorlake, which in turn enables us to map the i915 "cached" mode to the correct Meteorlake PAT index. We can also replace the platform-dependent cache-mode-to-string code in debugfs and elsewhere with a single implementation based on i915_cache_t. v2: * Fix PAT-to-cache-mode table for PVC. (Fei) * Cache display caching mode too. (Fei) * Improve and document criteria in i915_gem_object_can_bypass_llc() (Matt) v3: * Checkpatch issues. * Cache mode flags check fixed. v4: * Fix intel_device_info->cache_modes array size. (Matt) * Boolean cache mode and flags query. (Matt) * Reduce number of cache macros with some macro magic. * One more checkpatch fix. * Tweak tables to show legacy and Gen12 WB is fully coherent. Signed-off-by: Tvrtko Ursulin References: 9275277d5324 ("drm/i915: use pat_index instead of cache_level") Cc: Chris Wilson Cc: Fei Yang Cc: Andi Shyti Cc: Matt Roper --- drivers/gpu/drm/i915/gem/i915_gem_domain.c | 60 +++++---- drivers/gpu/drm/i915/gem/i915_gem_domain.h | 5 +- .../gpu/drm/i915/gem/i915_gem_execbuffer.c | 3 +- drivers/gpu/drm/i915/gem/i915_gem_internal.c | 2 +- drivers/gpu/drm/i915/gem/i915_gem_mman.c | 4 +- drivers/gpu/drm/i915/gem/i915_gem_object.c | 117 ++++++++++-------- drivers/gpu/drm/i915/gem/i915_gem_object.h | 11 +- .../gpu/drm/i915/gem/i915_gem_object_types.h | 116 +---------------- drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 8 +- drivers/gpu/drm/i915/gem/i915_gem_stolen.c | 2 +- drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c | 20 +-- drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 2 +- .../drm/i915/gem/selftests/huge_gem_object.c | 2 +- .../gpu/drm/i915/gem/selftests/huge_pages.c | 3 +- drivers/gpu/drm/i915/gt/gen8_ppgtt.c | 10 +- drivers/gpu/drm/i915/gt/intel_engine_cs.c | 2
+- drivers/gpu/drm/i915/gt/intel_ggtt.c | 25 ++-- drivers/gpu/drm/i915/gt/intel_ggtt_gmch.c | 4 +- drivers/gpu/drm/i915/gt/intel_gtt.c | 2 +- drivers/gpu/drm/i915/gt/intel_gtt.h | 3 +- drivers/gpu/drm/i915/gt/intel_ppgtt.c | 6 +- .../gpu/drm/i915/gt/intel_ring_submission.c | 4 +- drivers/gpu/drm/i915/gt/intel_timeline.c | 2 +- drivers/gpu/drm/i915/gt/selftest_hangcheck.c | 2 +- .../gpu/drm/i915/gt/selftest_workarounds.c | 2 +- drivers/gpu/drm/i915/i915_cache.c | 89 +++++++++++-- drivers/gpu/drm/i915/i915_cache.h | 70 ++++++++++- drivers/gpu/drm/i915/i915_debugfs.c | 53 ++------ drivers/gpu/drm/i915/i915_driver.c | 4 +- drivers/gpu/drm/i915/i915_gem.c | 13 -- drivers/gpu/drm/i915/i915_pci.c | 84 +++++++------ drivers/gpu/drm/i915/i915_perf.c | 2 +- drivers/gpu/drm/i915/intel_device_info.h | 6 +- .../gpu/drm/i915/selftests/i915_gem_evict.c | 4 +- drivers/gpu/drm/i915/selftests/igt_spinner.c | 2 +- .../gpu/drm/i915/selftests/mock_gem_device.c | 14 +-- 36 files changed, 391 insertions(+), 367 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c index 57db9c581bf6..c15f83de33af 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c @@ -8,6 +8,7 @@ #include "display/intel_frontbuffer.h" #include "gt/intel_gt.h" +#include "i915_cache.h" #include "i915_drv.h" #include "i915_gem_clflush.h" #include "i915_gem_domain.h" @@ -41,14 +42,17 @@ static bool gpu_write_needs_clflush(struct drm_i915_gem_object *obj) return false; /* - * For objects created by userspace through GEM_CREATE with pat_index - * set by set_pat extension, i915_gem_object_has_cache_level() will - * always return true, because the coherency of such object is managed - * by userspace. Othereise the call here would fall back to checking - * whether the object is un-cached or write-through. + * Always flush cache for UMD objects with PAT index set. 
*/ - return !(i915_gem_object_has_cache_level(obj, I915_CACHE_NONE) || - i915_gem_object_has_cache_level(obj, I915_CACHE_WT)); + if (obj->pat_set_by_user) + return true; + + /* + * Fully coherent cached access may end up with data in the CPU cache + * which hasn't hit memory yet. + */ + return i915_gem_object_has_cache_mode(obj, I915_CACHE_MODE_WB) && + i915_gem_object_has_cache_flag(obj, I915_CACHE_FLAG_COH2W); } bool i915_gem_cpu_write_needs_clflush(struct drm_i915_gem_object *obj) @@ -268,7 +272,7 @@ i915_gem_object_set_to_gtt_domain(struct drm_i915_gem_object *obj, bool write) /** * i915_gem_object_set_cache_level - Changes the cache-level of an object across all VMA. * @obj: object to act on - * @cache_level: new cache level to set for the object + * @cache: new caching mode to set for the object * * After this function returns, the object will be in the new cache-level * across all GTT and the contents of the backing storage will be coherent, @@ -281,18 +285,28 @@ i915_gem_object_set_to_gtt_domain(struct drm_i915_gem_object *obj, bool write) * that all direct access to the scanout remains coherent. */ int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj, - enum i915_cache_level cache_level) + i915_cache_t cache) { - int ret; + struct drm_i915_private *i915 = to_i915(obj->base.dev); + int pat, ret; - /* - * For objects created by userspace through GEM_CREATE with pat_index - * set by set_pat extension, simply return 0 here without touching - * the cache setting, because such objects should have an immutable - * cache setting by desgin and always managed by userspace. 
- */ - if (i915_gem_object_has_cache_level(obj, cache_level)) + pat = i915_cache_find_pat(i915, cache); + if (pat < 0) { + char buf[I915_CACHE_NAME_LEN]; + + i915_cache_print(buf, sizeof(buf), NULL, cache); + drm_err_ratelimited(&i915->drm, + "Attempting to use unknown caching mode %s!\n", + buf); + + return -EINVAL; + } else if (pat == obj->pat_index) { return 0; + } else if (obj->pat_set_by_user) { + drm_notice_once(&i915->drm, + "Attempting to change caching mode on an object with fixed PAT!\n"); + return -EINVAL; + } ret = i915_gem_object_wait(obj, I915_WAIT_INTERRUPTIBLE | @@ -302,7 +316,7 @@ int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj, return ret; /* Always invalidate stale cachelines */ - i915_gem_object_set_cache_coherency(obj, cache_level); + i915_gem_object_set_pat_index(obj, pat); obj->cache_dirty = true; /* The cache-level will be applied when each vma is rebound. */ @@ -337,10 +351,10 @@ int i915_gem_get_caching_ioctl(struct drm_device *dev, void *data, goto out; } - if (i915_gem_object_has_cache_level(obj, I915_CACHE_LLC) || - i915_gem_object_has_cache_level(obj, I915_CACHE_L3_LLC)) + if (i915_gem_object_has_cache_mode(obj, I915_CACHE_MODE_WB) && + i915_gem_object_has_cache_flag(obj, I915_CACHE_FLAG_COH2W)) args->caching = I915_CACHING_CACHED; - else if (i915_gem_object_has_cache_level(obj, I915_CACHE_WT)) + else if (i915_gem_object_has_cache_mode(obj, I915_CACHE_MODE_WT)) args->caching = I915_CACHING_DISPLAY; else args->caching = I915_CACHING_NONE; @@ -355,7 +369,7 @@ int i915_gem_set_caching_ioctl(struct drm_device *dev, void *data, struct drm_i915_private *i915 = to_i915(dev); struct drm_i915_gem_caching *args = data; struct drm_i915_gem_object *obj; - enum i915_cache_level level; + i915_cache_t level; int ret = 0; if (IS_DGFX(i915)) @@ -378,7 +392,7 @@ int i915_gem_set_caching_ioctl(struct drm_device *dev, void *data, if (!HAS_LLC(i915) && !HAS_SNOOP(i915)) return -ENODEV; - level = I915_CACHE_LLC; + level = 
I915_CACHE_CACHED; break; case I915_CACHING_DISPLAY: level = HAS_WT(i915) ? I915_CACHE_WT : I915_CACHE_NONE; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.h b/drivers/gpu/drm/i915/gem/i915_gem_domain.h index 9622df962bfc..6da5c351f6fd 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_domain.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.h @@ -6,10 +6,11 @@ #ifndef __I915_GEM_DOMAIN_H__ #define __I915_GEM_DOMAIN_H__ +#include "i915_cache.h" + struct drm_i915_gem_object; -enum i915_cache_level; int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj, - enum i915_cache_level cache_level); + i915_cache_t cache); #endif /* __I915_GEM_DOMAIN_H__ */ diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index 0a1d40220020..9d6e49c8a4c6 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -648,7 +648,8 @@ static inline int use_cpu_reloc(const struct reloc_cache *cache, */ return (cache->has_llc || obj->cache_dirty || - !i915_gem_object_has_cache_level(obj, I915_CACHE_NONE)); + !(obj->pat_set_by_user || + i915_gem_object_has_cache_mode(obj, I915_CACHE_MODE_UC))); } static int eb_reserve_vma(struct i915_execbuffer *eb, diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c index 6bc26b4b06b8..88c360c3d6a3 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c @@ -170,7 +170,7 @@ __i915_gem_object_create_internal(struct drm_i915_private *i915, obj->read_domains = I915_GEM_DOMAIN_CPU; obj->write_domain = I915_GEM_DOMAIN_CPU; - cache_level = HAS_LLC(i915) ? I915_CACHE_LLC : I915_CACHE_NONE; + cache_level = HAS_LLC(i915) ? 
I915_CACHE_CACHED : I915_CACHE_NONE; i915_gem_object_set_cache_coherency(obj, cache_level); return obj; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c index aa4d842d4c5a..cd7f8ded0d6f 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c @@ -382,7 +382,6 @@ static vm_fault_t vm_fault_gtt(struct vm_fault *vmf) goto err_reset; } - /* Access to snoopable pages through the GTT is incoherent. */ /* * For objects created by userspace through GEM_CREATE with pat_index * set by set_pat extension, coherency is managed by userspace, make @@ -391,7 +390,8 @@ static vm_fault_t vm_fault_gtt(struct vm_fault *vmf) * objects. Otherwise this helper function would fall back to checking * whether the object is un-cached. */ - if (!(i915_gem_object_has_cache_level(obj, I915_CACHE_NONE) || + if (!((obj->pat_set_by_user || + i915_gem_object_has_cache_mode(obj, I915_CACHE_MODE_UC)) || HAS_LLC(i915))) { ret = -EFAULT; goto err_unpin; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c index 3dc4fbb67d2b..ec1f0be43d0d 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c @@ -45,33 +45,6 @@ static struct kmem_cache *slab_objects; static const struct drm_gem_object_funcs i915_gem_object_funcs; -unsigned int i915_gem_get_pat_index(struct drm_i915_private *i915, - enum i915_cache_level level) -{ - if (drm_WARN_ON(&i915->drm, level >= I915_MAX_CACHE_LEVEL)) - return 0; - - return INTEL_INFO(i915)->cachelevel_to_pat[level]; -} - -bool i915_gem_object_has_cache_level(const struct drm_i915_gem_object *obj, - enum i915_cache_level lvl) -{ - /* - * In case the pat_index is set by user space, this kernel mode - * driver should leave the coherency to be managed by user space, - * simply return true here. 
- */ - if (obj->pat_set_by_user) - return true; - - /* - * Otherwise the pat_index should have been converted from cache_level - * so that the following comparison is valid. - */ - return obj->pat_index == i915_gem_get_pat_index(obj_to_i915(obj), lvl); -} - struct drm_i915_gem_object *i915_gem_object_alloc(void) { struct drm_i915_gem_object *obj; @@ -144,30 +117,72 @@ void __i915_gem_object_fini(struct drm_i915_gem_object *obj) dma_resv_fini(&obj->base._resv); } +bool i915_gem_object_has_cache_mode(const struct drm_i915_gem_object *obj, + enum i915_cache_mode mode) +{ + struct drm_i915_private *i915 = obj_to_i915(obj); + i915_cache_t cache = INTEL_INFO(i915)->cache_modes[obj->pat_index]; + + return I915_CACHE_MODE(cache) == mode; +} + +bool i915_gem_object_has_cache_flag(const struct drm_i915_gem_object *obj, + unsigned int flag) +{ + struct drm_i915_private *i915 = obj_to_i915(obj); + i915_cache_t cache = INTEL_INFO(i915)->cache_modes[obj->pat_index]; + + return I915_CACHE_FLAGS(cache) & flag; +} + +static void __i915_gem_object_update_coherency(struct drm_i915_gem_object *obj) +{ + struct drm_i915_private *i915 = obj_to_i915(obj); + i915_cache_t cache = INTEL_INFO(i915)->cache_modes[obj->pat_index]; + const unsigned int flags = I915_CACHE_FLAGS(cache); + const unsigned int mode = I915_CACHE_MODE(cache); + + if (mode == I915_CACHE_MODE_WC || + mode == I915_CACHE_MODE_WT || + (mode == I915_CACHE_MODE_WB && (flags & I915_CACHE_FLAG_COH2W))) + obj->cache_coherent = I915_BO_CACHE_COHERENT_FOR_READ | + I915_BO_CACHE_COHERENT_FOR_WRITE; + else if (HAS_LLC(i915)) + obj->cache_coherent = I915_BO_CACHE_COHERENT_FOR_READ; + else + obj->cache_coherent = 0; + + obj->cache_dirty = + !(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_WRITE) && + !IS_DGFX(i915); +} + /** * i915_gem_object_set_cache_coherency - Mark up the object's coherency levels - * for a given cache_level + * for a given caching mode * @obj: #drm_i915_gem_object - * @cache_level: cache level + * @cache: cache 
mode */ void i915_gem_object_set_cache_coherency(struct drm_i915_gem_object *obj, - unsigned int cache_level) + i915_cache_t cache) { - struct drm_i915_private *i915 = to_i915(obj->base.dev); + struct drm_i915_private *i915 = obj_to_i915(obj); + int found; - obj->pat_index = i915_gem_get_pat_index(i915, cache_level); + found = i915_cache_find_pat(i915, cache); + if (found < 0) { + char buf[I915_CACHE_NAME_LEN]; - if (cache_level != I915_CACHE_NONE) - obj->cache_coherent = (I915_BO_CACHE_COHERENT_FOR_READ | - I915_BO_CACHE_COHERENT_FOR_WRITE); - else if (HAS_LLC(i915)) - obj->cache_coherent = I915_BO_CACHE_COHERENT_FOR_READ; - else - obj->cache_coherent = 0; + i915_cache_print(buf, sizeof(buf), NULL, cache); + drm_err_ratelimited(&i915->drm, "Unknown cache mode %s!\n", + buf); - obj->cache_dirty = - !(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_WRITE) && - !IS_DGFX(i915); + found = i915->pat_uc; + } + + obj->pat_index = found; + + __i915_gem_object_update_coherency(obj); } /** @@ -181,24 +196,18 @@ void i915_gem_object_set_cache_coherency(struct drm_i915_gem_object *obj, void i915_gem_object_set_pat_index(struct drm_i915_gem_object *obj, unsigned int pat_index) { - struct drm_i915_private *i915 = to_i915(obj->base.dev); + struct drm_i915_private *i915 = obj_to_i915(obj); if (obj->pat_index == pat_index) return; + if (drm_WARN_ON_ONCE(&i915->drm, + pat_index > INTEL_INFO(i915)->max_pat_index)) + return; + obj->pat_index = pat_index; - if (pat_index != i915_gem_get_pat_index(i915, I915_CACHE_NONE)) - obj->cache_coherent = (I915_BO_CACHE_COHERENT_FOR_READ | - I915_BO_CACHE_COHERENT_FOR_WRITE); - else if (HAS_LLC(i915)) - obj->cache_coherent = I915_BO_CACHE_COHERENT_FOR_READ; - else - obj->cache_coherent = 0; - - obj->cache_dirty = - !(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_WRITE) && - !IS_DGFX(i915); + __i915_gem_object_update_coherency(obj); } bool i915_gem_object_can_bypass_llc(struct drm_i915_gem_object *obj) diff --git 
a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index 884a17275b3a..a5d4ee19d9be 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -13,6 +13,7 @@ #include "display/intel_frontbuffer.h" #include "intel_memory_region.h" +#include "i915_cache.h" #include "i915_gem_object_types.h" #include "i915_gem_gtt.h" #include "i915_gem_ww.h" @@ -32,10 +33,6 @@ static inline bool i915_gem_object_size_2big(u64 size) return false; } -unsigned int i915_gem_get_pat_index(struct drm_i915_private *i915, - enum i915_cache_level level); -bool i915_gem_object_has_cache_level(const struct drm_i915_gem_object *obj, - enum i915_cache_level lvl); void i915_gem_init__objects(struct drm_i915_private *i915); void i915_objects_module_exit(void); @@ -764,8 +761,12 @@ int i915_gem_object_wait_moving_fence(struct drm_i915_gem_object *obj, bool intr); bool i915_gem_object_has_unknown_state(struct drm_i915_gem_object *obj); +bool i915_gem_object_has_cache_mode(const struct drm_i915_gem_object *obj, + enum i915_cache_mode mode); +bool i915_gem_object_has_cache_flag(const struct drm_i915_gem_object *obj, + unsigned int flag); void i915_gem_object_set_cache_coherency(struct drm_i915_gem_object *obj, - unsigned int cache_level); + i915_cache_t cache); void i915_gem_object_set_pat_index(struct drm_i915_gem_object *obj, unsigned int pat_index); bool i915_gem_object_can_bypass_llc(struct drm_i915_gem_object *obj); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h index 8de2b91b3edf..6790e13ad262 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h @@ -14,6 +14,7 @@ #include #include "i915_active.h" +#include "i915_cache.h" #include "i915_selftest.h" #include "i915_vma_resource.h" @@ -116,93 +117,6 @@ struct drm_i915_gem_object_ops { const char *name; /* friendly name for debug, e.g. 
lockdep classes */ }; -/** - * enum i915_cache_level - The supported GTT caching values for system memory - * pages. - * - * These translate to some special GTT PTE bits when binding pages into some - * address space. It also determines whether an object, or rather its pages are - * coherent with the GPU, when also reading or writing through the CPU cache - * with those pages. - * - * Userspace can also control this through struct drm_i915_gem_caching. - */ -enum i915_cache_level { - /** - * @I915_CACHE_NONE: - * - * GPU access is not coherent with the CPU cache. If the cache is dirty - * and we need the underlying pages to be coherent with some later GPU - * access then we need to manually flush the pages. - * - * On shared LLC platforms reads and writes through the CPU cache are - * still coherent even with this setting. See also - * &drm_i915_gem_object.cache_coherent for more details. Due to this we - * should only ever use uncached for scanout surfaces, otherwise we end - * up over-flushing in some places. - * - * This is the default on non-LLC platforms. - */ - I915_CACHE_NONE = 0, - /** - * @I915_CACHE_LLC: - * - * GPU access is coherent with the CPU cache. If the cache is dirty, - * then the GPU will ensure that access remains coherent, when both - * reading and writing through the CPU cache. GPU writes can dirty the - * CPU cache. - * - * Not used for scanout surfaces. - * - * Applies to both platforms with shared LLC(HAS_LLC), and snooping - * based platforms(HAS_SNOOP). - * - * This is the default on shared LLC platforms. The only exception is - * scanout objects, where the display engine is not coherent with the - * CPU cache. For such objects I915_CACHE_NONE or I915_CACHE_WT is - * automatically applied by the kernel in pin_for_display, if userspace - * has not done so already. - */ - I915_CACHE_LLC, - /** - * @I915_CACHE_L3_LLC: - * - * Explicitly enable the Gfx L3 cache, with coherent LLC. 
- * - * The Gfx L3 sits between the domain specific caches, e.g - * sampler/render caches, and the larger LLC. LLC is coherent with the - * GPU, but L3 is only visible to the GPU, so likely needs to be flushed - * when the workload completes. - * - * Not used for scanout surfaces. - * - * Only exposed on some gen7 + GGTT. More recent hardware has dropped - * this explicit setting, where it should now be enabled by default. - */ - I915_CACHE_L3_LLC, - /** - * @I915_CACHE_WT: - * - * Write-through. Used for scanout surfaces. - * - * The GPU can utilise the caches, while still having the display engine - * be coherent with GPU writes, as a result we don't need to flush the - * CPU caches when moving out of the render domain. This is the default - * setting chosen by the kernel, if supported by the HW, otherwise we - * fallback to I915_CACHE_NONE. On the CPU side writes through the CPU - * cache still need to be flushed, to remain coherent with the display - * engine. - */ - I915_CACHE_WT, - /** - * @I915_MAX_CACHE_LEVEL: - * - * Mark the last entry in the enum. Used for defining cachelevel_to_pat - * array for cache_level to pat translation table. - */ - I915_MAX_CACHE_LEVEL, -}; - enum i915_map_type { I915_MAP_WB = 0, I915_MAP_WC, @@ -403,16 +317,6 @@ struct drm_i915_gem_object { /** * @cache_coherent: * - * Note: with the change above which replaced @cache_level with pat_index, - * the use of @cache_coherent is limited to the objects created by kernel - * or by userspace without pat index specified. - * Check for @pat_set_by_user to find out if an object has pat index set - * by userspace. The ioctl's to change cache settings have also been - * disabled for the objects with pat index set by userspace. Please don't - * assume @cache_coherent having the flags set as describe here. A helper - * function i915_gem_object_has_cache_level() provides one way to bypass - * the use of this field. 
- * * Track whether the pages are coherent with the GPU if reading or * writing through the CPU caches. The largely depends on the * @cache_level setting. @@ -447,7 +351,7 @@ struct drm_i915_gem_object { * flushing the surface just before doing the scanout. This does mean * we might unnecessarily flush non-scanout objects in some places, but * the default assumption is that all normal objects should be using - * I915_CACHE_LLC, at least on platforms with the shared LLC. + * I915_CACHE_CACHED, at least on platforms with the shared LLC. * * Supported values: * @@ -486,16 +390,6 @@ struct drm_i915_gem_object { /** * @cache_dirty: * - * Note: with the change above which replaced cache_level with pat_index, - * the use of @cache_dirty is limited to the objects created by kernel - * or by userspace without pat index specified. - * Check for @pat_set_by_user to find out if an object has pat index set - * by userspace. The ioctl's to change cache settings have also been - * disabled for the objects with pat_index set by userspace. Please don't - * assume @cache_dirty is set as describe here. Also see helper function - * i915_gem_object_has_cache_level() for possible ways to bypass the use - * of this field. - * * Track if we are we dirty with writes through the CPU cache for this * object. As a result reading directly from main memory might yield * stale data. @@ -531,9 +425,9 @@ struct drm_i915_gem_object { * * 1. All userspace objects, by default, have @cache_level set as * I915_CACHE_NONE. The only exception is userptr objects, where we - * instead force I915_CACHE_LLC, but we also don't allow userspace to - * ever change the @cache_level for such objects. Another special case - * is dma-buf, which doesn't rely on @cache_dirty, but there we + * instead force I915_CACHE_CACHED, but we also don't allow userspace + * to ever change the @cache_level for such objects. 
Another special + * case is dma-buf, which doesn't rely on @cache_dirty, but there we * always do a forced flush when acquiring the pages, if there is a * chance that the pages can be read directly from main memory with * the GPU. diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c index 8f1633c3fb93..aba908f0349f 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c @@ -584,7 +584,7 @@ static int shmem_object_init(struct intel_memory_region *mem, static struct lock_class_key lock_class; struct drm_i915_private *i915 = mem->i915; struct address_space *mapping; - unsigned int cache_level; + i915_cache_t cache; gfp_t mask; int ret; @@ -628,11 +628,11 @@ static int shmem_object_init(struct intel_memory_region *mem, * However, we maintain the display planes as UC, and so * need to rebind when first used as such. */ - cache_level = I915_CACHE_LLC; + cache = I915_CACHE_CACHED; else - cache_level = I915_CACHE_NONE; + cache = I915_CACHE_NONE; - i915_gem_object_set_cache_coherency(obj, cache_level); + i915_gem_object_set_cache_coherency(obj, cache); i915_gem_object_init_memory_region(obj, mem); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c index 1c8eb806b7d3..cc907a1f1c53 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c @@ -691,7 +691,7 @@ static int __i915_gem_object_create_stolen(struct intel_memory_region *mem, obj->stolen = stolen; obj->read_domains = I915_GEM_DOMAIN_CPU | I915_GEM_DOMAIN_GTT; - cache_level = HAS_LLC(mem->i915) ? I915_CACHE_LLC : I915_CACHE_NONE; + cache_level = HAS_LLC(mem->i915) ? 
I915_CACHE_CACHED : I915_CACHE_NONE; i915_gem_object_set_cache_coherency(obj, cache_level); if (WARN_ON(!i915_gem_object_trylock(obj, NULL))) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c index 6bd6c239f4ac..107176d1757b 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c @@ -48,14 +48,14 @@ void i915_ttm_migrate_set_ban_memcpy(bool ban) } #endif -static enum i915_cache_level -i915_ttm_cache_level(struct drm_i915_private *i915, struct ttm_resource *res, - struct ttm_tt *ttm) +static i915_cache_t +i915_ttm_cache(struct drm_i915_private *i915, struct ttm_resource *res, + struct ttm_tt *ttm) { return ((HAS_LLC(i915) || HAS_SNOOP(i915)) && !i915_ttm_gtt_binds_lmem(res) && - ttm->caching == ttm_cached) ? I915_CACHE_LLC : - I915_CACHE_NONE; + ttm->caching == ttm_cached) ? I915_CACHE_CACHED : + I915_CACHE_NONE; } static unsigned int @@ -112,8 +112,8 @@ void i915_ttm_adjust_domains_after_move(struct drm_i915_gem_object *obj) void i915_ttm_adjust_gem_after_move(struct drm_i915_gem_object *obj) { struct ttm_buffer_object *bo = i915_gem_to_ttm(obj); - unsigned int cache_level; unsigned int mem_flags; + i915_cache_t cache; unsigned int i; int mem_type; @@ -126,13 +126,13 @@ void i915_ttm_adjust_gem_after_move(struct drm_i915_gem_object *obj) if (!bo->resource) { mem_flags = I915_BO_FLAG_STRUCT_PAGE; mem_type = I915_PL_SYSTEM; - cache_level = I915_CACHE_NONE; + cache = I915_CACHE_NONE; } else { mem_flags = i915_ttm_cpu_maps_iomem(bo->resource) ? 
I915_BO_FLAG_IOMEM : I915_BO_FLAG_STRUCT_PAGE; mem_type = bo->resource->mem_type; - cache_level = i915_ttm_cache_level(to_i915(bo->base.dev), bo->resource, - bo->ttm); + cache = i915_ttm_cache(to_i915(bo->base.dev), bo->resource, + bo->ttm); } /* @@ -157,7 +157,7 @@ void i915_ttm_adjust_gem_after_move(struct drm_i915_gem_object *obj) obj->mem_flags &= ~(I915_BO_FLAG_STRUCT_PAGE | I915_BO_FLAG_IOMEM); obj->mem_flags |= mem_flags; - i915_gem_object_set_cache_coherency(obj, cache_level); + i915_gem_object_set_cache_coherency(obj, cache); } /** diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c index 1d3ebdf4069b..5d2891981bd4 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c @@ -553,7 +553,7 @@ i915_gem_userptr_ioctl(struct drm_device *dev, obj->mem_flags = I915_BO_FLAG_STRUCT_PAGE; obj->read_domains = I915_GEM_DOMAIN_CPU; obj->write_domain = I915_GEM_DOMAIN_CPU; - i915_gem_object_set_cache_coherency(obj, I915_CACHE_LLC); + i915_gem_object_set_cache_coherency(obj, I915_CACHE_CACHED); obj->userptr.ptr = args->user_ptr; obj->userptr.notifier_seq = ULONG_MAX; diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c b/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c index bac957755068..77d04be5e9d7 100644 --- a/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c +++ b/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c @@ -123,7 +123,7 @@ huge_gem_object(struct drm_i915_private *i915, obj->read_domains = I915_GEM_DOMAIN_CPU; obj->write_domain = I915_GEM_DOMAIN_CPU; - cache_level = HAS_LLC(i915) ? I915_CACHE_LLC : I915_CACHE_NONE; + cache_level = HAS_LLC(i915) ? 
I915_CACHE_CACHED : I915_CACHE_NONE; i915_gem_object_set_cache_coherency(obj, cache_level); obj->scratch = phys_size; diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c index 6bddd733d796..6ca5b9dbc414 100644 --- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c +++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c @@ -200,9 +200,10 @@ huge_pages_object(struct drm_i915_private *i915, obj->write_domain = I915_GEM_DOMAIN_CPU; obj->read_domains = I915_GEM_DOMAIN_CPU; - cache_level = HAS_LLC(i915) ? I915_CACHE_LLC : I915_CACHE_NONE; + cache_level = HAS_LLC(i915) ? I915_CACHE_CACHED : I915_CACHE_NONE; i915_gem_object_set_cache_coherency(obj, cache_level); + obj->mm.page_mask = page_mask; return obj; diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c index 675f71f06e89..3c93a73cf6b1 100644 --- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c +++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c @@ -16,11 +16,11 @@ #include "intel_gtt.h" static u64 gen8_pde_encode(const dma_addr_t addr, - const enum i915_cache_level level) + const enum i915_cache_mode cache_mode) { u64 pde = addr | GEN8_PAGE_PRESENT | GEN8_PAGE_RW; - if (level != I915_CACHE_NONE) + if (cache_mode != I915_CACHE_MODE_UC) pde |= PPAT_CACHED_PDE; else pde |= PPAT_UNCACHED; @@ -43,10 +43,10 @@ static u64 gen8_pte_encode(dma_addr_t addr, * See translation table defined by LEGACY_CACHELEVEL. 
*/ switch (pat_index) { - case I915_CACHE_NONE: + case I915_CACHE_MODE_UC: pte |= PPAT_UNCACHED; break; - case I915_CACHE_WT: + case I915_CACHE_MODE_WT: pte |= PPAT_DISPLAY_ELLC; break; default: @@ -893,7 +893,7 @@ static int gen8_init_scratch(struct i915_address_space *vm) } fill_px(obj, vm->scratch[i - 1]->encode); - obj->encode = gen8_pde_encode(px_dma(obj), I915_CACHE_NONE); + obj->encode = gen8_pde_encode(px_dma(obj), I915_CACHE_MODE_UC); vm->scratch[i] = obj; } diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c index ee15486fed0d..f1e59e512d14 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c +++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c @@ -1103,7 +1103,7 @@ static int init_status_page(struct intel_engine_cs *engine) return PTR_ERR(obj); } - i915_gem_object_set_cache_coherency(obj, I915_CACHE_LLC); + i915_gem_object_set_cache_coherency(obj, I915_CACHE_CACHED); vma = i915_vma_instance(obj, &engine->gt->ggtt->vm, NULL); if (IS_ERR(vma)) { diff --git a/drivers/gpu/drm/i915/gt/intel_ggtt.c b/drivers/gpu/drm/i915/gt/intel_ggtt.c index fca61ddca8ad..ab5f654e7557 100644 --- a/drivers/gpu/drm/i915/gt/intel_ggtt.c +++ b/drivers/gpu/drm/i915/gt/intel_ggtt.c @@ -1011,11 +1011,6 @@ static int gen8_gmch_probe(struct i915_ggtt *ggtt) return ggtt_probe_common(ggtt, size); } -/* - * For pre-gen8 platforms pat_index is the same as enum i915_cache_level, - * so the switch-case statements in these PTE encode functions are still valid. - * See translation table LEGACY_CACHELEVEL. 
- */ static u64 snb_pte_encode(dma_addr_t addr, unsigned int pat_index, u32 flags) @@ -1023,11 +1018,11 @@ static u64 snb_pte_encode(dma_addr_t addr, gen6_pte_t pte = GEN6_PTE_ADDR_ENCODE(addr) | GEN6_PTE_VALID; switch (pat_index) { - case I915_CACHE_L3_LLC: - case I915_CACHE_LLC: + case I915_CACHE_MODE_WB: + case __I915_CACHE_MODE_WB_L3: pte |= GEN6_PTE_CACHE_LLC; break; - case I915_CACHE_NONE: + case I915_CACHE_MODE_UC: pte |= GEN6_PTE_UNCACHED; break; default: @@ -1044,13 +1039,13 @@ static u64 ivb_pte_encode(dma_addr_t addr, gen6_pte_t pte = GEN6_PTE_ADDR_ENCODE(addr) | GEN6_PTE_VALID; switch (pat_index) { - case I915_CACHE_L3_LLC: + case __I915_CACHE_MODE_WB_L3: pte |= GEN7_PTE_CACHE_L3_LLC; break; - case I915_CACHE_LLC: + case I915_CACHE_MODE_WB: pte |= GEN6_PTE_CACHE_LLC; break; - case I915_CACHE_NONE: + case I915_CACHE_MODE_UC: pte |= GEN6_PTE_UNCACHED; break; default: @@ -1069,7 +1064,7 @@ static u64 byt_pte_encode(dma_addr_t addr, if (!(flags & PTE_READ_ONLY)) pte |= BYT_PTE_WRITEABLE; - if (pat_index != I915_CACHE_NONE) + if (pat_index != I915_CACHE_MODE_UC) pte |= BYT_PTE_SNOOPED_BY_CPU_CACHES; return pte; @@ -1081,7 +1076,7 @@ static u64 hsw_pte_encode(dma_addr_t addr, { gen6_pte_t pte = HSW_PTE_ADDR_ENCODE(addr) | GEN6_PTE_VALID; - if (pat_index != I915_CACHE_NONE) + if (pat_index != I915_CACHE_MODE_UC) pte |= HSW_WB_LLC_AGE3; return pte; @@ -1094,9 +1089,9 @@ static u64 iris_pte_encode(dma_addr_t addr, gen6_pte_t pte = HSW_PTE_ADDR_ENCODE(addr) | GEN6_PTE_VALID; switch (pat_index) { - case I915_CACHE_NONE: + case I915_CACHE_MODE_UC: break; - case I915_CACHE_WT: + case I915_CACHE_MODE_WT: pte |= HSW_WT_ELLC_LLC_AGE3; break; default: diff --git a/drivers/gpu/drm/i915/gt/intel_ggtt_gmch.c b/drivers/gpu/drm/i915/gt/intel_ggtt_gmch.c index 866c416afb73..803c41ac4ccb 100644 --- a/drivers/gpu/drm/i915/gt/intel_ggtt_gmch.c +++ b/drivers/gpu/drm/i915/gt/intel_ggtt_gmch.c @@ -21,7 +21,7 @@ static void gmch_ggtt_insert_page(struct i915_address_space *vm, 
unsigned int pat_index, u32 unused) { - unsigned int flags = (pat_index == I915_CACHE_NONE) ? + unsigned int flags = (pat_index == I915_CACHE_MODE_UC) ? AGP_USER_MEMORY : AGP_USER_CACHED_MEMORY; intel_gmch_gtt_insert_page(addr, offset >> PAGE_SHIFT, flags); @@ -32,7 +32,7 @@ static void gmch_ggtt_insert_entries(struct i915_address_space *vm, unsigned int pat_index, u32 unused) { - unsigned int flags = (pat_index == I915_CACHE_NONE) ? + unsigned int flags = (pat_index == I915_CACHE_MODE_UC) ? AGP_USER_MEMORY : AGP_USER_CACHED_MEMORY; intel_gmch_gtt_insert_sg_entries(vma_res->bi.pages, vma_res->start >> PAGE_SHIFT, diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.c b/drivers/gpu/drm/i915/gt/intel_gtt.c index 065099362a98..48055304537a 100644 --- a/drivers/gpu/drm/i915/gt/intel_gtt.c +++ b/drivers/gpu/drm/i915/gt/intel_gtt.c @@ -676,7 +676,7 @@ __vm_create_scratch_for_read(struct i915_address_space *vm, unsigned long size) if (IS_ERR(obj)) return ERR_CAST(obj); - i915_gem_object_set_cache_coherency(obj, I915_CACHE_LLC); + i915_gem_object_set_cache_coherency(obj, I915_CACHE_CACHED); vma = i915_vma_instance(obj, vm, NULL); if (IS_ERR(vma)) { diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.h b/drivers/gpu/drm/i915/gt/intel_gtt.h index 7192a534a654..af4277c1d577 100644 --- a/drivers/gpu/drm/i915/gt/intel_gtt.h +++ b/drivers/gpu/drm/i915/gt/intel_gtt.h @@ -636,7 +636,8 @@ void __set_pd_entry(struct i915_page_directory * const pd, const unsigned short idx, struct i915_page_table *pt, - u64 (*encode)(const dma_addr_t, const enum i915_cache_level)); + u64 (*encode)(const dma_addr_t, + const enum i915_cache_mode cache_mode)); #define set_pd_entry(pd, idx, to) \ __set_pd_entry((pd), (idx), px_pt(to), gen8_pde_encode) diff --git a/drivers/gpu/drm/i915/gt/intel_ppgtt.c b/drivers/gpu/drm/i915/gt/intel_ppgtt.c index 436756bfbb1a..3e461d4f3693 100644 --- a/drivers/gpu/drm/i915/gt/intel_ppgtt.c +++ b/drivers/gpu/drm/i915/gt/intel_ppgtt.c @@ -98,14 +98,16 @@ void __set_pd_entry(struct 
i915_page_directory * const pd, const unsigned short idx, struct i915_page_table * const to, - u64 (*encode)(const dma_addr_t, const enum i915_cache_level)) + u64 (*encode)(const dma_addr_t, + const enum i915_cache_mode cache_mode)) { /* Each thread pre-pins the pd, and we may have a thread per pde. */ GEM_BUG_ON(atomic_read(px_used(pd)) > NALLOC * I915_PDES); atomic_inc(px_used(pd)); pd->entry[idx] = to; - write_dma_entry(px_base(pd), idx, encode(px_dma(to), I915_CACHE_LLC)); + write_dma_entry(px_base(pd), idx, + encode(px_dma(to), I915_CACHE_MODE_WB)); } void diff --git a/drivers/gpu/drm/i915/gt/intel_ring_submission.c b/drivers/gpu/drm/i915/gt/intel_ring_submission.c index 92085ffd23de..9131d228d285 100644 --- a/drivers/gpu/drm/i915/gt/intel_ring_submission.c +++ b/drivers/gpu/drm/i915/gt/intel_ring_submission.c @@ -551,7 +551,9 @@ alloc_context_vma(struct intel_engine_cs *engine) * later platforms don't have L3 control bits in the PTE. */ if (IS_IVYBRIDGE(i915)) - i915_gem_object_set_cache_coherency(obj, I915_CACHE_L3_LLC); + i915_gem_object_set_cache_coherency(obj, + I915_CACHE_CACHED | + __I915_CACHE_FLAG(L3)); vma = i915_vma_instance(obj, &engine->gt->ggtt->vm, NULL); if (IS_ERR(vma)) { diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c index b9640212d659..025ce54c886d 100644 --- a/drivers/gpu/drm/i915/gt/intel_timeline.c +++ b/drivers/gpu/drm/i915/gt/intel_timeline.c @@ -26,7 +26,7 @@ static struct i915_vma *hwsp_alloc(struct intel_gt *gt) if (IS_ERR(obj)) return ERR_CAST(obj); - i915_gem_object_set_cache_coherency(obj, I915_CACHE_LLC); + i915_gem_object_set_cache_coherency(obj, I915_CACHE_CACHED); vma = i915_vma_instance(obj, &gt->ggtt->vm, NULL); if (IS_ERR(vma)) diff --git a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c index 8b0d84f2aad2..fc278fa463b0 100644 --- a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c +++ b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c @@
-64,7 +64,7 @@ static int hang_init(struct hang *h, struct intel_gt *gt) goto err_hws; } - i915_gem_object_set_cache_coherency(h->hws, I915_CACHE_LLC); + i915_gem_object_set_cache_coherency(h->hws, I915_CACHE_CACHED); vaddr = i915_gem_object_pin_map_unlocked(h->hws, I915_MAP_WB); if (IS_ERR(vaddr)) { err = PTR_ERR(vaddr); diff --git a/drivers/gpu/drm/i915/gt/selftest_workarounds.c b/drivers/gpu/drm/i915/gt/selftest_workarounds.c index 14a8b25b6204..d25990d33d44 100644 --- a/drivers/gpu/drm/i915/gt/selftest_workarounds.c +++ b/drivers/gpu/drm/i915/gt/selftest_workarounds.c @@ -111,7 +111,7 @@ read_nonprivs(struct intel_context *ce) if (IS_ERR(result)) return result; - i915_gem_object_set_cache_coherency(result, I915_CACHE_LLC); + i915_gem_object_set_cache_coherency(result, I915_CACHE_CACHED); cs = i915_gem_object_pin_map_unlocked(result, I915_MAP_WB); if (IS_ERR(cs)) { diff --git a/drivers/gpu/drm/i915/i915_cache.c b/drivers/gpu/drm/i915/i915_cache.c index 06eb5933c719..f4ba1cb430d3 100644 --- a/drivers/gpu/drm/i915/i915_cache.c +++ b/drivers/gpu/drm/i915/i915_cache.c @@ -6,13 +6,88 @@ #include "i915_cache.h" #include "i915_drv.h" -void i915_cache_init(struct drm_i915_private *i915) +int i915_cache_init(struct drm_i915_private *i915) { - i915->pat_uc = i915_gem_get_pat_index(i915, I915_CACHE_NONE); - drm_info(&i915->drm, "Using PAT index %u for uncached access\n", - i915->pat_uc); + int ret; - i915->pat_wb = i915_gem_get_pat_index(i915, I915_CACHE_LLC); - drm_info(&i915->drm, "Using PAT index %u for write-back access\n", - i915->pat_wb); + ret = i915_cache_find_pat(i915, I915_CACHE_NONE); + if (ret < 0) { + drm_err(&i915->drm, + "Failed to find PAT index for uncached access\n"); + return -ENODEV; + } + drm_info(&i915->drm, "Using PAT index %u for uncached access\n", ret); + i915->pat_uc = ret; + + ret = i915_cache_find_pat(i915, I915_CACHE_CACHED); + if (ret < 0) { + drm_err(&i915->drm, + "Failed to find PAT index for write-back access\n"); + return -ENODEV; + } + 
drm_info(&i915->drm, "Using PAT index %u for write-back access\n", ret); + i915->pat_wb = ret; + + return 0; +} + +int i915_cache_find_pat(struct drm_i915_private *i915, i915_cache_t cache) +{ + const struct intel_device_info *info = INTEL_INFO(i915); + int i; + + for (i = 0; i < ARRAY_SIZE(info->cache_modes); i++) { + if (info->cache_modes[i] == cache) + return i; + } + + return -1; +} + +void i915_cache_print(char *buf, size_t buflen, const char *suffix, + i915_cache_t cache) +{ + const enum i915_cache_mode mode = I915_CACHE_MODE(cache); + static const char * const mode_str[] = { + [I915_CACHE_MODE_UC] = "UC", + [I915_CACHE_MODE_WB] = "WB", + [I915_CACHE_MODE_WT] = "WT", + [I915_CACHE_MODE_WC] = "WC", + }; + static const char * const flag_str[] = { + [ilog2(I915_CACHE_FLAG_COH1W)] = "1-Way-Coherent", + [ilog2(I915_CACHE_FLAG_COH2W)] = "2-Way-Coherent", + [ilog2(I915_CACHE_FLAG_L3)] = "L3", + [ilog2(I915_CACHE_FLAG_CLOS1)] = "CLOS1", + [ilog2(I915_CACHE_FLAG_CLOS2)] = "CLOS2", + }; + + if (mode > ARRAY_SIZE(mode_str)) { + snprintf(buf, buflen, "0x%x%s", cache, suffix ?: ""); + } else { + unsigned long flags = I915_CACHE_FLAGS(cache); + unsigned long bit; + int ret; + + ret = snprintf(buf, buflen, "%s", mode_str[mode]); + buf += ret; + buflen -= ret; + + /* + * Don't print "1-way-2-way", it would be confusing and 2-way + * implies 1-way anyway. 
+ */ + if ((flags & (I915_CACHE_FLAG_COH1W | I915_CACHE_FLAG_COH2W)) == + (I915_CACHE_FLAG_COH1W | I915_CACHE_FLAG_COH2W)) + flags &= ~I915_CACHE_FLAG_COH1W; + + for_each_set_bit(bit, &flags, BITS_PER_TYPE(i915_cache_t)) { + ret = snprintf(buf, buflen, "-%s", flag_str[bit]); + buf += ret; + buflen -= ret; + } + + if (suffix) + snprintf(buf, buflen, "%s", suffix); + } } diff --git a/drivers/gpu/drm/i915/i915_cache.h b/drivers/gpu/drm/i915/i915_cache.h index cb68936fb8a2..d9e97318b942 100644 --- a/drivers/gpu/drm/i915/i915_cache.h +++ b/drivers/gpu/drm/i915/i915_cache.h @@ -6,8 +6,76 @@ #ifndef __I915_CACHE_H__ #define __I915_CACHE_H__ +#include + +struct drm_printer; + struct drm_i915_private; -void i915_cache_init(struct drm_i915_private *i915); +typedef u16 i915_cache_t; + +/* Cache modes */ +enum i915_cache_mode { + I915_CACHE_MODE_UC = 0, + I915_CACHE_MODE_WB, + __I915_CACHE_MODE_WB_L3, /* Special do-not-use entry for legacy 1:1 mapping. */ + I915_CACHE_MODE_WT, + I915_CACHE_MODE_WC, + I915_NUM_CACHE_MODES +}; + +/* Cache mode flag bits */ +#define I915_CACHE_FLAG_COH1W (0x1) +#define I915_CACHE_FLAG_COH2W (0x2) /* 1-way needs to be set too. */ +#define I915_CACHE_FLAG_L3 (0x4) +#define I915_CACHE_FLAG_CLOS1 (0x8) +#define I915_CACHE_FLAG_CLOS2 (0x10) + +/* + * Overloaded I915_CACHE() macro based on: + * https://stackoverflow.com/questions/3046889/optional-parameters-with-c-macros + * + * It is possible to call I915_CACHE with mode and zero or more flags as + * separate arguments. 
Ie these all work: + * + * I915_CACHE(WB) + * I915_CACHE(WB, COH1W, COH2W) + * I915_CACHE(WB, COH1W, COH2W, L3) + */ + +#define __I915_CACHE_FLAG(f) (I915_CACHE_FLAG_##f << 8) +#define __I915_CACHE(m, f) ((i915_cache_t)(I915_CACHE_MODE_##m | (f))) + +#define I915_CACHE_4(m, f1, f2, f3) __I915_CACHE(m, __I915_CACHE_FLAG(f1) | __I915_CACHE_FLAG(f2) | __I915_CACHE_FLAG(f3)) +#define I915_CACHE_3(m, f1, f2) __I915_CACHE(m, __I915_CACHE_FLAG(f1) | __I915_CACHE_FLAG(f2)) +#define I915_CACHE_2(m, f1) __I915_CACHE(m, __I915_CACHE_FLAG(f1)) +#define I915_CACHE_1(m) __I915_CACHE(m, 0) +#define I915_CACHE_0(m) __I915_CACHE(WC, 0) + +#define FUNC_CHOOSER(_f1, _f2, _f3, _f4, _f5, ...) _f5 +#define FUNC_RECOMPOSER(argsWithParentheses) FUNC_CHOOSER argsWithParentheses +#define CHOOSE_FROM_ARG_COUNT(...) FUNC_RECOMPOSER((__VA_ARGS__, I915_CACHE_4, I915_CACHE_3, I915_CACHE_2, I915_CACHE_1, )) +#define NO_ARG_EXPANDER() ,,,I915_CACHE_0 +#define MACRO_CHOOSER(...) CHOOSE_FROM_ARG_COUNT(NO_ARG_EXPANDER __VA_ARGS__ ()) + +#define I915_CACHE(...) MACRO_CHOOSER(__VA_ARGS__)(__VA_ARGS__) + +/* i915_cache_t mode and flags extraction helpers. */ +#define I915_CACHE_MODE(cache) \ + ((enum i915_cache_mode)(((i915_cache_t)(cache)) & 0xff)) +#define I915_CACHE_FLAGS(cache) \ + ((unsigned int)((((i915_cache_t)(cache) & 0xff00)) >> 8)) + +/* Helpers for i915 caching modes. 
*/ +#define I915_CACHE_NONE I915_CACHE(UC) +#define I915_CACHE_CACHED I915_CACHE(WB, COH1W, COH2W) +#define I915_CACHE_WT I915_CACHE(WT) + +int i915_cache_init(struct drm_i915_private *i915); +int i915_cache_find_pat(struct drm_i915_private *i915, i915_cache_t cache); +void i915_cache_print(char *buf, size_t buflen, const char *suffix, + i915_cache_t cache); + +#define I915_CACHE_NAME_LEN (40) #endif /* __I915_CACHE_H__ */ diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c index 4de44cf1026d..4ec292011546 100644 --- a/drivers/gpu/drm/i915/i915_debugfs.c +++ b/drivers/gpu/drm/i915/i915_debugfs.c @@ -140,57 +140,18 @@ static const char *stringify_vma_type(const struct i915_vma *vma) return "ppgtt"; } -static const char *i915_cache_level_str(struct drm_i915_gem_object *obj) -{ - struct drm_i915_private *i915 = obj_to_i915(obj); - - if (IS_METEORLAKE(i915)) { - switch (obj->pat_index) { - case 0: return " WB"; - case 1: return " WT"; - case 2: return " UC"; - case 3: return " WB (1-Way Coh)"; - case 4: return " WB (2-Way Coh)"; - default: return " not defined"; - } - } else if (IS_PONTEVECCHIO(i915)) { - switch (obj->pat_index) { - case 0: return " UC"; - case 1: return " WC"; - case 2: return " WT"; - case 3: return " WB"; - case 4: return " WT (CLOS1)"; - case 5: return " WB (CLOS1)"; - case 6: return " WT (CLOS2)"; - case 7: return " WT (CLOS2)"; - default: return " not defined"; - } - } else if (GRAPHICS_VER(i915) >= 12) { - switch (obj->pat_index) { - case 0: return " WB"; - case 1: return " WC"; - case 2: return " WT"; - case 3: return " UC"; - default: return " not defined"; - } - } else { - switch (obj->pat_index) { - case 0: return " UC"; - case 1: return HAS_LLC(i915) ? 
- " LLC" : " snooped"; - case 2: return " L3+LLC"; - case 3: return " WT"; - default: return " not defined"; - } - } -} - void i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) { + struct drm_i915_private *i915 = to_i915(obj->base.dev); + char buf[I915_CACHE_NAME_LEN]; struct i915_vma *vma; int pin_count = 0; + i915_cache_print(buf, sizeof(buf), + obj->pat_set_by_user ? "!" : NULL, + INTEL_INFO(i915)->cache_modes[obj->pat_index]); + seq_printf(m, "%pK: %c%c%c %8zdKiB %02x %02x %s%s%s", &obj->base, get_tiling_flag(obj), @@ -199,7 +160,7 @@ i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) obj->base.size / 1024, obj->read_domains, obj->write_domain, - i915_cache_level_str(obj), + buf, obj->mm.dirty ? " dirty" : "", obj->mm.madv == I915_MADV_DONTNEED ? " purgeable" : ""); if (obj->base.name) diff --git a/drivers/gpu/drm/i915/i915_driver.c b/drivers/gpu/drm/i915/i915_driver.c index bb2223cc3470..8663388a524f 100644 --- a/drivers/gpu/drm/i915/i915_driver.c +++ b/drivers/gpu/drm/i915/i915_driver.c @@ -241,7 +241,9 @@ static int i915_driver_early_probe(struct drm_i915_private *dev_priv) i915_memcpy_init_early(dev_priv); intel_runtime_pm_init_early(&dev_priv->runtime_pm); - i915_cache_init(dev_priv); + ret = i915_cache_init(dev_priv); + if (ret < 0) + return ret; ret = i915_workqueues_init(dev_priv); if (ret < 0) diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c index 896aa48ed089..814705cfeb12 100644 --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -1144,19 +1144,6 @@ int i915_gem_init(struct drm_i915_private *dev_priv) unsigned int i; int ret; - /* - * In the proccess of replacing cache_level with pat_index a tricky - * dependency is created on the definition of the enum i915_cache_level. - * in case this enum is changed, PTE encode would be broken. - * Add a WARNING here. 
And remove when we completely quit using this - * enum - */ - BUILD_BUG_ON(I915_CACHE_NONE != 0 || - I915_CACHE_LLC != 1 || - I915_CACHE_L3_LLC != 2 || - I915_CACHE_WT != 3 || - I915_MAX_CACHE_LEVEL != 4); - /* We need to fallback to 4K pages if host doesn't support huge gtt. */ if (intel_vgpu_active(dev_priv) && !intel_vgpu_has_huge_gtt(dev_priv)) RUNTIME_INFO(dev_priv)->page_sizes = I915_GTT_PAGE_SIZE_4K; diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c index fcacdc21643c..565a60a1645d 100644 --- a/drivers/gpu/drm/i915/i915_pci.c +++ b/drivers/gpu/drm/i915/i915_pci.c @@ -32,6 +32,7 @@ #include "gt/intel_sa_media.h" #include "gem/i915_gem_object_types.h" +#include "i915_cache.h" #include "i915_driver.h" #include "i915_drv.h" #include "i915_pci.h" @@ -43,36 +44,43 @@ .__runtime.graphics.ip.ver = (x), \ .__runtime.media.ip.ver = (x) -#define LEGACY_CACHELEVEL \ - .cachelevel_to_pat = { \ - [I915_CACHE_NONE] = 0, \ - [I915_CACHE_LLC] = 1, \ - [I915_CACHE_L3_LLC] = 2, \ - [I915_CACHE_WT] = 3, \ +#define LEGACY_CACHE_MODES \ + .cache_modes = { \ + [I915_CACHE_MODE_UC] = I915_CACHE(UC), \ + [I915_CACHE_MODE_WB] = I915_CACHE(WB, COH1W, COH2W), \ + [__I915_CACHE_MODE_WB_L3] = I915_CACHE(WB, COH1W, COH2W, L3), \ + [I915_CACHE_MODE_WT] = I915_CACHE(WT), \ } -#define TGL_CACHELEVEL \ - .cachelevel_to_pat = { \ - [I915_CACHE_NONE] = 3, \ - [I915_CACHE_LLC] = 0, \ - [I915_CACHE_L3_LLC] = 0, \ - [I915_CACHE_WT] = 2, \ +#define GEN12_CACHE_MODES \ + .cache_modes = { \ + [0] = I915_CACHE(WB, COH1W, COH2W), \ + [1] = I915_CACHE(WC), \ + [2] = I915_CACHE(WT), \ + [3] = I915_CACHE(UC), \ } -#define PVC_CACHELEVEL \ - .cachelevel_to_pat = { \ - [I915_CACHE_NONE] = 0, \ - [I915_CACHE_LLC] = 3, \ - [I915_CACHE_L3_LLC] = 3, \ - [I915_CACHE_WT] = 2, \ +/* FIXME is 1-way or 2-way for 3, 5, 7 */ + +#define PVC_CACHE_MODES \ + .cache_modes = { \ + [0] = I915_CACHE(UC), \ + [1] = I915_CACHE(WC), \ + [2] = I915_CACHE(WT), \ + [3] = I915_CACHE(WB, COH1W), \ + [4] 
= I915_CACHE(WT, CLOS1), \ + [5] = I915_CACHE(WB, COH1W, CLOS1), \ + [6] = I915_CACHE(WT, CLOS2), \ + [7] = I915_CACHE(WB, COH1W, CLOS2), \ } -#define MTL_CACHELEVEL \ - .cachelevel_to_pat = { \ - [I915_CACHE_NONE] = 2, \ - [I915_CACHE_LLC] = 3, \ - [I915_CACHE_L3_LLC] = 3, \ - [I915_CACHE_WT] = 1, \ +#define MTL_CACHE_MODES \ + .cache_modes = { \ + [0] = I915_CACHE(WB), \ + [1] = I915_CACHE(WT), \ + [2] = I915_CACHE(UC), \ + [3] = I915_CACHE(WB, COH1W), \ + [4] = I915_CACHE(WB, COH1W, COH2W), \ } /* Keep in gen based order, and chronological order within a gen */ @@ -97,7 +105,7 @@ .max_pat_index = 3, \ GEN_DEFAULT_PAGE_SIZES, \ GEN_DEFAULT_REGIONS, \ - LEGACY_CACHELEVEL + LEGACY_CACHE_MODES #define I845_FEATURES \ GEN(2), \ @@ -112,7 +120,7 @@ .max_pat_index = 3, \ GEN_DEFAULT_PAGE_SIZES, \ GEN_DEFAULT_REGIONS, \ - LEGACY_CACHELEVEL + LEGACY_CACHE_MODES static const struct intel_device_info i830_info = { I830_FEATURES, @@ -145,7 +153,7 @@ static const struct intel_device_info i865g_info = { .max_pat_index = 3, \ GEN_DEFAULT_PAGE_SIZES, \ GEN_DEFAULT_REGIONS, \ - LEGACY_CACHELEVEL + LEGACY_CACHE_MODES static const struct intel_device_info i915g_info = { GEN3_FEATURES, @@ -208,7 +216,7 @@ static const struct intel_device_info pnv_m_info = { .max_pat_index = 3, \ GEN_DEFAULT_PAGE_SIZES, \ GEN_DEFAULT_REGIONS, \ - LEGACY_CACHELEVEL + LEGACY_CACHE_MODES static const struct intel_device_info i965g_info = { GEN4_FEATURES, @@ -252,7 +260,7 @@ static const struct intel_device_info gm45_info = { .max_pat_index = 3, \ GEN_DEFAULT_PAGE_SIZES, \ GEN_DEFAULT_REGIONS, \ - LEGACY_CACHELEVEL + LEGACY_CACHE_MODES static const struct intel_device_info ilk_d_info = { GEN5_FEATURES, @@ -282,7 +290,7 @@ static const struct intel_device_info ilk_m_info = { .__runtime.ppgtt_size = 31, \ GEN_DEFAULT_PAGE_SIZES, \ GEN_DEFAULT_REGIONS, \ - LEGACY_CACHELEVEL + LEGACY_CACHE_MODES #define SNB_D_PLATFORM \ GEN6_FEATURES, \ @@ -330,7 +338,7 @@ static const struct intel_device_info 
snb_m_gt2_info = { .__runtime.ppgtt_size = 31, \ GEN_DEFAULT_PAGE_SIZES, \ GEN_DEFAULT_REGIONS, \ - LEGACY_CACHELEVEL + LEGACY_CACHE_MODES #define IVB_D_PLATFORM \ GEN7_FEATURES, \ @@ -387,7 +395,7 @@ static const struct intel_device_info vlv_info = { .platform_engine_mask = BIT(RCS0) | BIT(VCS0) | BIT(BCS0), GEN_DEFAULT_PAGE_SIZES, GEN_DEFAULT_REGIONS, - LEGACY_CACHELEVEL, + LEGACY_CACHE_MODES }; #define G75_FEATURES \ @@ -473,7 +481,7 @@ static const struct intel_device_info chv_info = { .has_coherent_ggtt = false, GEN_DEFAULT_PAGE_SIZES, GEN_DEFAULT_REGIONS, - LEGACY_CACHELEVEL, + LEGACY_CACHE_MODES }; #define GEN9_DEFAULT_PAGE_SIZES \ @@ -536,7 +544,7 @@ static const struct intel_device_info skl_gt4_info = { .max_pat_index = 3, \ GEN9_DEFAULT_PAGE_SIZES, \ GEN_DEFAULT_REGIONS, \ - LEGACY_CACHELEVEL + LEGACY_CACHE_MODES static const struct intel_device_info bxt_info = { GEN9_LP_FEATURES, @@ -640,7 +648,7 @@ static const struct intel_device_info jsl_info = { #define GEN12_FEATURES \ GEN11_FEATURES, \ GEN(12), \ - TGL_CACHELEVEL, \ + GEN12_CACHE_MODES, \ .has_global_mocs = 1, \ .has_pxp = 1, \ .max_pat_index = 3 @@ -708,7 +716,7 @@ static const struct intel_device_info adl_p_info = { .__runtime.graphics.ip.ver = 12, \ .__runtime.graphics.ip.rel = 50, \ XE_HP_PAGE_SIZES, \ - TGL_CACHELEVEL, \ + GEN12_CACHE_MODES, \ .dma_mask_size = 46, \ .has_3d_pipeline = 1, \ .has_64bit_reloc = 1, \ @@ -803,7 +811,7 @@ static const struct intel_device_info pvc_info = { BIT(VCS0) | BIT(CCS0) | BIT(CCS1) | BIT(CCS2) | BIT(CCS3), .require_force_probe = 1, - PVC_CACHELEVEL, + PVC_CACHE_MODES }; static const struct intel_gt_definition xelpmp_extra_gt[] = { @@ -838,7 +846,7 @@ static const struct intel_device_info mtl_info = { .memory_regions = REGION_SMEM | REGION_STOLEN_LMEM, .platform_engine_mask = BIT(RCS0) | BIT(BCS0) | BIT(CCS0), .require_force_probe = 1, - MTL_CACHELEVEL, + MTL_CACHE_MODES }; #undef PLATFORM diff --git a/drivers/gpu/drm/i915/i915_perf.c 
b/drivers/gpu/drm/i915/i915_perf.c index 04bc1f4a1115..973175a64534 100644 --- a/drivers/gpu/drm/i915/i915_perf.c +++ b/drivers/gpu/drm/i915/i915_perf.c @@ -1870,7 +1870,7 @@ static int alloc_oa_buffer(struct i915_perf_stream *stream) return PTR_ERR(bo); } - i915_gem_object_set_cache_coherency(bo, I915_CACHE_LLC); + i915_gem_object_set_cache_coherency(bo, I915_CACHE_CACHED); /* PreHSW required 512K alignment, HSW requires 16M */ vma = i915_vma_instance(bo, &gt->ggtt->vm, NULL); diff --git a/drivers/gpu/drm/i915/intel_device_info.h b/drivers/gpu/drm/i915/intel_device_info.h index dbfe6443457b..2ce13b7c48cb 100644 --- a/drivers/gpu/drm/i915/intel_device_info.h +++ b/drivers/gpu/drm/i915/intel_device_info.h @@ -27,6 +27,8 @@ #include +#include "i915_cache.h" + #include "intel_step.h" #include "gt/intel_engine_types.h" @@ -243,8 +245,8 @@ struct intel_device_info { */ const struct intel_runtime_info __runtime; - u32 cachelevel_to_pat[I915_MAX_CACHE_LEVEL]; - u32 max_pat_index; + i915_cache_t cache_modes[8]; + unsigned int max_pat_index; }; struct intel_driver_caps { diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_evict.c b/drivers/gpu/drm/i915/selftests/i915_gem_evict.c index f910ec9b6d2b..ba821e48baa5 100644 --- a/drivers/gpu/drm/i915/selftests/i915_gem_evict.c +++ b/drivers/gpu/drm/i915/selftests/i915_gem_evict.c @@ -267,7 +267,7 @@ static int igt_evict_for_cache_color(void *arg) err = PTR_ERR(obj); goto cleanup; } - i915_gem_object_set_cache_coherency(obj, I915_CACHE_LLC); + i915_gem_object_set_cache_coherency(obj, I915_CACHE_CACHED); quirk_add(obj, &objects); vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, @@ -283,7 +283,7 @@ static int igt_evict_for_cache_color(void *arg) err = PTR_ERR(obj); goto cleanup; } - i915_gem_object_set_cache_coherency(obj, I915_CACHE_LLC); + i915_gem_object_set_cache_coherency(obj, I915_CACHE_CACHED); quirk_add(obj, &objects); /* Neighbouring; same colour - should fit */ diff --git a/drivers/gpu/drm/i915/selftests/igt_spinner.c
b/drivers/gpu/drm/i915/selftests/igt_spinner.c index 3c5e0952f1b8..4cfc5000d6ff 100644 --- a/drivers/gpu/drm/i915/selftests/igt_spinner.c +++ b/drivers/gpu/drm/i915/selftests/igt_spinner.c @@ -23,7 +23,7 @@ int igt_spinner_init(struct igt_spinner *spin, struct intel_gt *gt) err = PTR_ERR(spin->hws); goto err; } - i915_gem_object_set_cache_coherency(spin->hws, I915_CACHE_LLC); + i915_gem_object_set_cache_coherency(spin->hws, I915_CACHE_CACHED); spin->obj = i915_gem_object_create_internal(gt->i915, PAGE_SIZE); if (IS_ERR(spin->obj)) { diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.c b/drivers/gpu/drm/i915/selftests/mock_gem_device.c index 1d1a457e2aee..8ae77bcf27fa 100644 --- a/drivers/gpu/drm/i915/selftests/mock_gem_device.c +++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.c @@ -126,13 +126,13 @@ static const struct intel_device_info mock_info = { .memory_regions = REGION_SMEM, .platform_engine_mask = BIT(0), - /* simply use legacy cache level for mock device */ + /* Simply use legacy cache modes for the mock device. */ .max_pat_index = 3, - .cachelevel_to_pat = { - [I915_CACHE_NONE] = 0, - [I915_CACHE_LLC] = 1, - [I915_CACHE_L3_LLC] = 2, - [I915_CACHE_WT] = 3, + .cache_modes = { + [0] = I915_CACHE(UC), + [1] = I915_CACHE(WB, COH1W), + [2] = I915_CACHE(WB, COH1W, COH2W, L3), + [3] = I915_CACHE(WT), }, }; @@ -181,7 +181,7 @@ struct drm_i915_private *mock_gem_device(void) /* Set up device info and initial runtime info. 
*/ intel_device_info_driver_create(i915, pdev->device, &mock_info); - i915_cache_init(i915); + WARN_ON(i915_cache_init(i915)); dev_pm_domain_set(&pdev->dev, &pm_domain); pm_runtime_enable(&pdev->dev); From patchwork Thu Jul 27 14:55:01 2023 X-Patchwork-Submitter: Tvrtko Ursulin X-Patchwork-Id: 13330217 From: Tvrtko Ursulin To: Intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org Subject: [RFC 5/8] drm/i915: Improve the vm_fault_gtt user PAT index restriction Date: Thu, 27 Jul 2023 15:55:01 +0100 Message-Id: <20230727145504.1919316-6-tvrtko.ursulin@linux.intel.com> In-Reply-To: <20230727145504.1919316-1-tvrtko.ursulin@linux.intel.com> References: <20230727145504.1919316-1-tvrtko.ursulin@linux.intel.com> Cc: Matt Roper , Fei Yang , Tvrtko Ursulin From: Tvrtko Ursulin Now that i915 understands the caching modes behind PAT indices, we can refine the check in vm_fault_gtt() to not reject the uncached PAT if it was set by userspace on a snoopable platform.
Signed-off-by: Tvrtko Ursulin Cc: Fei Yang Cc: Matt Roper --- drivers/gpu/drm/i915/gem/i915_gem_mman.c | 14 +++----------- 1 file changed, 3 insertions(+), 11 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c index cd7f8ded0d6f..9aa6ecf68432 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c @@ -382,17 +382,9 @@ static vm_fault_t vm_fault_gtt(struct vm_fault *vmf) goto err_reset; } - /* - * For objects created by userspace through GEM_CREATE with pat_index - * set by set_pat extension, coherency is managed by userspace, make - * sure we don't fail handling the vm fault by calling - * i915_gem_object_has_cache_level() which always return true for such - * objects. Otherwise this helper function would fall back to checking - * whether the object is un-cached. - */ - if (!((obj->pat_set_by_user || - i915_gem_object_has_cache_mode(obj, I915_CACHE_MODE_UC)) || - HAS_LLC(i915))) { + /* Access to snoopable pages through the GTT is incoherent. 
*/ + if (!i915_gem_object_has_cache_mode(obj, I915_CACHE_MODE_UC) && + !HAS_LLC(i915)) { ret = -EFAULT; goto err_unpin; } } From patchwork Thu Jul 27 14:55:02 2023 X-Patchwork-Submitter: Tvrtko Ursulin X-Patchwork-Id: 13330219 From: Tvrtko Ursulin To: Intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org Subject: [RFC 6/8] drm/i915: Lift the user PAT restriction from gpu_write_needs_clflush Date: Thu, 27 Jul 2023 15:55:02 +0100 Message-Id: <20230727145504.1919316-7-tvrtko.ursulin@linux.intel.com> In-Reply-To: <20230727145504.1919316-1-tvrtko.ursulin@linux.intel.com> References: <20230727145504.1919316-1-tvrtko.ursulin@linux.intel.com> Cc: Matt Roper , Fei Yang , Tvrtko Ursulin From: Tvrtko Ursulin Now that i915 understands the caching modes behind PAT indices, and having also special cased the Meteorlake snooping fully coherent mode, we can remove the user PAT check from gpu_write_needs_clflush(). Signed-off-by: Tvrtko Ursulin Cc: Fei Yang Cc: Matt Roper Reviewed-by: Matt Roper --- drivers/gpu/drm/i915/gem/i915_gem_domain.c | 6 ------ 1 file changed, 6 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c index c15f83de33af..bf3a2fa0e539 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c @@ -41,12 +41,6 @@ static bool gpu_write_needs_clflush(struct drm_i915_gem_object *obj) if (IS_METEORLAKE(i915)) return false; - /* - * Always flush cache for UMD objects with PAT index set.
- */ - if (obj->pat_set_by_user) - return true; - /* * Fully coherent cached access may end up with data in the CPU cache * which hasn't hit memory yet. From patchwork Thu Jul 27 14:55:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tvrtko Ursulin X-Patchwork-Id: 13330220 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 73E83C001E0 for ; Thu, 27 Jul 2023 14:55:44 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id B7C7310E5B6; Thu, 27 Jul 2023 14:55:35 +0000 (UTC) Received: from mgamail.intel.com (unknown [192.55.52.88]) by gabe.freedesktop.org (Postfix) with ESMTPS id 9350C10E5AD; Thu, 27 Jul 2023 14:55:28 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1690469728; x=1722005728; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=KbRRS4pkZrtjFwIn3n1e/Q5U1aqA6UT5BLRvhc1Gt3g=; b=gZrpgv35/9PvATlJNOQP1LtyLdm+GxPCUcM5WwHMzM++7bk7A+XEJ1Q0 Fe6Sh2Hxie3IDwoYQMXIsyV66XEFzl0cWdvee70MiawK9OoLnoZIRIwCI uwNjdTac7KB4ReA4iN/MbJajFHUITLShbEFOlt3KkuIUxW2z9i5Zbzuq2 FR5jFTBJA5EzFSQnFQV8i5P7xsM/xh635TUu4HByfeQn91rHyO6xFyWAC ds2DF+AlFgL5VFOf0poWUSiwtAVVxp3DmP/fUQs848WmzaKIBfP0LBmCf pt/68BopNsLv/Mlv5gKw0NRfQpSWUBBg2Zho+so1jsaIgexm31SSV+JOx g==; X-IronPort-AV: E=McAfee;i="6600,9927,10784"; a="399268441" X-IronPort-AV: E=Sophos;i="6.01,235,1684825200"; d="scan'208";a="399268441" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Jul 2023 07:55:28 -0700 X-ExtLoop1: 
1 X-IronPort-AV: E=Sophos;i="6.01,202,1684825200"; d="scan'208";a="870433758" Received: from jlenehan-mobl1.ger.corp.intel.com (HELO localhost.localdomain) ([10.213.228.208]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Jul 2023 07:55:28 -0700 From: Tvrtko Ursulin To: Intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org Subject: [RFC 7/8] drm/i915: Lift the user PAT restriction from use_cpu_reloc Date: Thu, 27 Jul 2023 15:55:03 +0100 Message-Id: <20230727145504.1919316-8-tvrtko.ursulin@linux.intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230727145504.1919316-1-tvrtko.ursulin@linux.intel.com> References: <20230727145504.1919316-1-tvrtko.ursulin@linux.intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Matt Roper , Fei Yang , Tvrtko Ursulin Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" From: Tvrtko Ursulin Now that i915 understands the caching modes behind PAT indices, we can refine the check in use_cpu_reloc() to not reject the uncached PAT if it was set by userspace. Instead it can decide based on the presence of full coherency which should be functionally equivalent on legacy platforms. We can ignore WT since it is only used by the display, and we can ignore Meteorlake since it will fail on the existing "has_llc" condition before the object cache mode check. 
Signed-off-by: Tvrtko Ursulin Cc: Fei Yang Cc: Matt Roper --- drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 9 +-------- 1 file changed, 1 insertion(+), 8 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index 9d6e49c8a4c6..f74b33670bad 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -640,16 +640,9 @@ static inline int use_cpu_reloc(const struct reloc_cache *cache, if (DBG_FORCE_RELOC == FORCE_GTT_RELOC) return false; - /* - * For objects created by userspace through GEM_CREATE with pat_index - * set by set_pat extension, i915_gem_object_has_cache_level() always - * return true, otherwise the call would fall back to checking whether - * the object is un-cached. - */ return (cache->has_llc || obj->cache_dirty || - !(obj->pat_set_by_user || - i915_gem_object_has_cache_mode(obj, I915_CACHE_MODE_UC))); + i915_gem_object_has_cache_flag(obj, I915_CACHE_FLAG_COH2W)); } static int eb_reserve_vma(struct i915_execbuffer *eb, From patchwork Thu Jul 27 14:55:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tvrtko Ursulin X-Patchwork-Id: 13330221 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id C0A0AC04FE2 for ; Thu, 27 Jul 2023 14:55:45 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 543C610E5B2; Thu, 27 Jul 2023 14:55:36 +0000 (UTC) Received: from mgamail.intel.com (unknown [192.55.52.88]) by gabe.freedesktop.org (Postfix) with ESMTPS id 0576210E5B0; Thu, 27 Jul 2023 14:55:30 +0000 (UTC) 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1690469730; x=1722005730; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=kD9quBf/SA2K77HBhUxBu97J9TXzK/yTvjKXlcj8wXM=; b=Mva6uJHaZPNIJoDb9CF0JPv3AWW3vm407H2iOLXPN88oaJUlLgKly8cg mtjCO0jt6HIu6q/HFdqlr2wUzjNCxEbBH8ZzgAa6p7ohA4/bpzfwutZXF vGL/FYA1WEa1sj+PGzI6TcZlK6bTLk2oxvN2hML5MplX36IsTwtrDUB9f 6wlo2coKPVxp2GxdCLszHjhlkCL+EGRELpOsIdE0YCGTooxFoPV1u3EvA axe/fMe13JFVau0JiNKcYkkDH0aI78I8rt1qeLLDMdl5o0fGb3/DQjDAo s218fHKXWQEdNZsBVZv/jr5/+bvSCRSrDlJFtu2aR6vkHSVdpLfaA7EiU Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10784"; a="399268450" X-IronPort-AV: E=Sophos;i="6.01,235,1684825200"; d="scan'208";a="399268450" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Jul 2023 07:55:29 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.01,202,1684825200"; d="scan'208";a="870433761" Received: from jlenehan-mobl1.ger.corp.intel.com (HELO localhost.localdomain) ([10.213.228.208]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Jul 2023 07:55:30 -0700 From: Tvrtko Ursulin To: Intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org Subject: [RFC 8/8] drm/i915: Refine the caching check in i915_gem_object_can_bypass_llc Date: Thu, 27 Jul 2023 15:55:04 +0100 Message-Id: <20230727145504.1919316-9-tvrtko.ursulin@linux.intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230727145504.1919316-1-tvrtko.ursulin@linux.intel.com> References: <20230727145504.1919316-1-tvrtko.ursulin@linux.intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Matt Roper , Fei Yang , Tvrtko Ursulin Errors-To: dri-devel-bounces@lists.freedesktop.org 
Sender: "dri-devel" From: Tvrtko Ursulin Now that i915 understands the caching modes behind PAT indices, we can refine the check in i915_gem_object_can_bypass_llc() to stop assuming any user PAT can bypass the shared cache (if there is any). Instead we can use the absence of I915_BO_CACHE_COHERENT_FOR_WRITE as the criteria, which is set for all caching modes where writes from the CPU side (in this case buffer clears before handing buffers over to userspace) are fully coherent with respect to reads from the GPU. Signed-off-by: Tvrtko Ursulin Cc: Fei Yang Cc: Matt Roper --- drivers/gpu/drm/i915/gem/i915_gem_object.c | 18 +++++++++++------- 1 file changed, 11 insertions(+), 7 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c index ec1f0be43d0d..8c4b54bd3911 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c @@ -221,12 +221,6 @@ bool i915_gem_object_can_bypass_llc(struct drm_i915_gem_object *obj) if (!(obj->flags & I915_BO_ALLOC_USER)) return false; - /* - * Always flush cache for UMD objects at creation time. - */ - if (obj->pat_set_by_user) - return true; - /* * EHL and JSL add the 'Bypass LLC' MOCS entry, which should make it * possible for userspace to bypass the GTT caching bits set by the @@ -239,7 +233,17 @@ bool i915_gem_object_can_bypass_llc(struct drm_i915_gem_object *obj) * it, but since i915 takes the stance of always zeroing memory before * handing it to userspace, we need to prevent this. */ - return IS_JSL_EHL(i915); + if (IS_JSL_EHL(i915)) + return true; + + /* + * Any caching mode where writes via CPU cache are not coherent with + * the GPU needs explicit flushing to ensure GPU can not see stale data. + */ + if (!(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_WRITE)) + return true; + + return false; } static void i915_gem_close_object(struct drm_gem_object *gem, struct drm_file *file)