From patchwork Thu Dec 9 15:45:18 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ramalingam C X-Patchwork-Id: 12667639 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id C1900C433EF for ; Thu, 9 Dec 2021 17:02:22 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 87F7E10EF4B; Thu, 9 Dec 2021 16:55:08 +0000 (UTC) Received: from mga06.intel.com (mga06.intel.com [134.134.136.31]) by gabe.freedesktop.org (Postfix) with ESMTPS id 1119610E117; Thu, 9 Dec 2021 15:46:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1639064766; x=1670600766; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=bsXPukqlLCCJw3JWUSxomfYTFNFmna63t9/bTtAwnUw=; b=Q63tlVe5XNMClDQri6CdQZ/4xSh7+44iEWRnZp7+uIdHomEKcASWLAeK 2DHzR3Jd5BNAhGhlXhl4iU/sIJTzuZAvFDco6p07Q52Eu/NGiEbX3Si9r 1v3+sSzF6e3AaR+SsWy5mMV/WEMvRklu28E9Tlw6nq6ixg51CX7/M3h/W u9HcpN35jeIEIXQvpqJeBvouO16IgB2o18GIxG4+l6wAZ1oTeywep00rw taiNECPgtMljN7hiinGcRxjH0tgEkNcM1+ng3xZwz7JBvU/Ar8XE5NSXJ sNAwWU/TjozwDK75fPOgKjJg8D7zz0TbcmrB84Er5i5U6zCfn72iQahEi w==; X-IronPort-AV: E=McAfee;i="6200,9189,10192"; a="298916803" X-IronPort-AV: E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="298916803" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:45:57 -0800 X-IronPort-AV: E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="503535115" Received: from ramaling-i9x.iind.intel.com ([10.99.66.205]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:45:52 -0800 From: Ramalingam C To: dri-devel , intel-gfx Date: Thu, 9 Dec 2021 21:15:18 +0530 Message-Id: <20211209154533.4084-2-ramalingam.c@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20211209154533.4084-1-ramalingam.c@intel.com> References: <20211209154533.4084-1-ramalingam.c@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 01/16] drm/i915/xehpsdv: enforce min GTT alignment X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Hellstrom Thomas , Matthew Auld Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" From: Matthew Auld For local-memory objects we need to align the GTT addresses to 64K, both for the ppgtt and ggtt. We need to support vm->min_alignment > 4K, depending on the vm itself and the type of object we are inserting. With this in mind update the GTT selftests to take this into account. 
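
[Editor's illustrative aside: the core rule this patch introduces is that every GTT placement must honour a per-VM minimum alignment (64K for local-memory objects on 64K-page platforms) instead of the historical 4K. The plain C sketch below only illustrates that rounding rule with stand-ins for the kernel's max()/round_up() helpers; the constant names mirror the patch, but this is not the kernel code itself.]

#include <stdio.h>
#include <stdint.h>

#define I915_GTT_MIN_ALIGNMENT  4096ULL   /* historical 4K minimum */
#define I915_GTT_PAGE_SIZE_64K  65536ULL  /* required for lmem on 64K-page platforms */

/* Round value up to the next multiple of a power-of-two alignment. */
static uint64_t round_up_u64(uint64_t value, uint64_t align)
{
	return (value + align - 1) & ~(align - 1);
}

int main(void)
{
	/* Hypothetical object: a 300 KiB buffer placed in local memory. */
	uint64_t obj_size = 300 * 1024;
	int is_lmem = 1;

	/* Effective alignment: never below the VM minimum for the region. */
	uint64_t align = I915_GTT_MIN_ALIGNMENT;
	if (is_lmem && align < I915_GTT_PAGE_SIZE_64K)
		align = I915_GTT_PAGE_SIZE_64K;

	uint64_t padded = round_up_u64(obj_size, align);
	printf("alignment %llu, padded size %llu\n",
	       (unsigned long long)align, (unsigned long long)padded);
	return 0;
}
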
Signed-off-by: Matthew Auld Signed-off-by: Ramalingam C Cc: Joonas Lahtinen Cc: Rodrigo Vivi --- .../i915/gem/selftests/i915_gem_client_blt.c | 23 +++-- drivers/gpu/drm/i915/gt/intel_gtt.c | 9 ++ drivers/gpu/drm/i915/gt/intel_gtt.h | 9 ++ drivers/gpu/drm/i915/i915_vma.c | 4 + drivers/gpu/drm/i915/selftests/i915_gem_gtt.c | 96 ++++++++++++------- 5 files changed, 100 insertions(+), 41 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c index 8402ed925a69..6b9b861e43e5 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c @@ -39,6 +39,7 @@ struct tiled_blits { struct blit_buffer scratch; struct i915_vma *batch; u64 hole; + u64 align; u32 width; u32 height; }; @@ -410,14 +411,21 @@ tiled_blits_create(struct intel_engine_cs *engine, struct rnd_state *prng) goto err_free; } - hole_size = 2 * PAGE_ALIGN(WIDTH * HEIGHT * 4); + t->align = I915_GTT_PAGE_SIZE_2M; /* XXX worst case, derive from vm! */ + t->align = max(t->align, + i915_vm_min_alignment(t->ce->vm, INTEL_MEMORY_LOCAL)); + t->align = max(t->align, + i915_vm_min_alignment(t->ce->vm, INTEL_MEMORY_SYSTEM)); + + hole_size = 2 * round_up(WIDTH * HEIGHT * 4, t->align); hole_size *= 2; /* room to maneuver */ - hole_size += 2 * I915_GTT_MIN_ALIGNMENT; + hole_size += 2 * t->align; /* padding on either side */ mutex_lock(&t->ce->vm->mutex); memset(&hole, 0, sizeof(hole)); err = drm_mm_insert_node_in_range(&t->ce->vm->mm, &hole, - hole_size, 0, I915_COLOR_UNEVICTABLE, + hole_size, t->align, + I915_COLOR_UNEVICTABLE, 0, U64_MAX, DRM_MM_INSERT_BEST); if (!err) @@ -428,7 +436,7 @@ tiled_blits_create(struct intel_engine_cs *engine, struct rnd_state *prng) goto err_put; } - t->hole = hole.start + I915_GTT_MIN_ALIGNMENT; + t->hole = hole.start + t->align; pr_info("Using hole at %llx\n", t->hole); err = tiled_blits_create_buffers(t, WIDTH, HEIGHT, prng); @@ -455,7 +463,7 @@ static void tiled_blits_destroy(struct tiled_blits *t) static int tiled_blits_prepare(struct tiled_blits *t, struct rnd_state *prng) { - u64 offset = PAGE_ALIGN(t->width * t->height * 4); + u64 offset = round_up(t->width * t->height * 4, t->align); u32 *map; int err; int i; @@ -486,8 +494,7 @@ static int tiled_blits_prepare(struct tiled_blits *t, static int tiled_blits_bounce(struct tiled_blits *t, struct rnd_state *prng) { - u64 offset = - round_up(t->width * t->height * 4, 2 * I915_GTT_MIN_ALIGNMENT); + u64 offset = round_up(t->width * t->height * 4, 2 * t->align); int err; /* We want to check position invariant tiling across GTT eviction */ @@ -500,7 +507,7 @@ static int tiled_blits_bounce(struct tiled_blits *t, struct rnd_state *prng) /* Reposition so that we overlap the old addresses, and slightly off */ err = tiled_blit(t, - &t->buffers[2], t->hole + I915_GTT_MIN_ALIGNMENT, + &t->buffers[2], t->hole + t->align, &t->buffers[1], t->hole + 3 * offset / 2); if (err) return err; diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.c b/drivers/gpu/drm/i915/gt/intel_gtt.c index b30e4478f098..cccac3ff0fa5 100644 --- a/drivers/gpu/drm/i915/gt/intel_gtt.c +++ b/drivers/gpu/drm/i915/gt/intel_gtt.c @@ -218,6 +218,15 @@ void i915_address_space_init(struct i915_address_space *vm, int subclass) GEM_BUG_ON(!vm->total); drm_mm_init(&vm->mm, 0, vm->total); + + memset64(vm->min_alignment, I915_GTT_MIN_ALIGNMENT, + ARRAY_SIZE(vm->min_alignment)); + + if (HAS_64K_PAGES(vm->i915)) { + vm->min_alignment[INTEL_MEMORY_LOCAL] = 
I915_GTT_PAGE_SIZE_64K; + vm->min_alignment[INTEL_MEMORY_STOLEN_LOCAL] = I915_GTT_PAGE_SIZE_64K; + } + vm->mm.head_node.color = I915_COLOR_UNEVICTABLE; INIT_LIST_HEAD(&vm->bound_list); diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.h b/drivers/gpu/drm/i915/gt/intel_gtt.h index 15b98321e89a..ff3867e69720 100644 --- a/drivers/gpu/drm/i915/gt/intel_gtt.h +++ b/drivers/gpu/drm/i915/gt/intel_gtt.h @@ -28,6 +28,8 @@ #include "gt/intel_reset.h" #include "i915_selftest.h" #include "i915_vma_types.h" +#include "i915_params.h" +#include "intel_memory_region.h" #define I915_GFP_ALLOW_FAIL (GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN) @@ -224,6 +226,7 @@ struct i915_address_space { struct device *dma; u64 total; /* size addr space maps (ex. 2GB for ggtt) */ u64 reserved; /* size addr space reserved */ + u64 min_alignment[INTEL_MEMORY_STOLEN_LOCAL + 1]; unsigned int bind_async_flags; @@ -382,6 +385,12 @@ i915_vm_has_scratch_64K(struct i915_address_space *vm) return vm->scratch_order == get_order(I915_GTT_PAGE_SIZE_64K); } +static inline u64 i915_vm_min_alignment(struct i915_address_space *vm, + enum intel_memory_type type) +{ + return vm->min_alignment[type]; +} + static inline bool i915_vm_has_cache_coloring(struct i915_address_space *vm) { diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c index 927f0d4f8e11..73972bf4052b 100644 --- a/drivers/gpu/drm/i915/i915_vma.c +++ b/drivers/gpu/drm/i915/i915_vma.c @@ -698,6 +698,10 @@ i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags) } color = 0; + + if (HAS_64K_PAGES(vma->vm->i915) && i915_gem_object_is_lmem(vma->obj)) + alignment = max(alignment, I915_GTT_PAGE_SIZE_64K); + if (i915_vm_has_cache_coloring(vma->vm)) color = vma->obj->cache_level; diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c index 46f4236039a9..fdb4bf88293b 100644 --- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c +++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c @@ -237,6 +237,8 @@ static int lowlevel_hole(struct i915_address_space *vm, u64 hole_start, u64 hole_end, unsigned long end_time) { + const unsigned int min_alignment = + i915_vm_min_alignment(vm, INTEL_MEMORY_SYSTEM); I915_RND_STATE(seed_prng); struct i915_vma *mock_vma; unsigned int size; @@ -250,9 +252,10 @@ static int lowlevel_hole(struct i915_address_space *vm, I915_RND_SUBSTATE(prng, seed_prng); struct drm_i915_gem_object *obj; unsigned int *order, count, n; - u64 hole_size; + u64 hole_size, aligned_size; - hole_size = (hole_end - hole_start) >> size; + aligned_size = max_t(u32, ilog2(min_alignment), size); + hole_size = (hole_end - hole_start) >> aligned_size; if (hole_size > KMALLOC_MAX_SIZE / sizeof(u32)) hole_size = KMALLOC_MAX_SIZE / sizeof(u32); count = hole_size >> 1; @@ -273,8 +276,8 @@ static int lowlevel_hole(struct i915_address_space *vm, } GEM_BUG_ON(!order); - GEM_BUG_ON(count * BIT_ULL(size) > vm->total); - GEM_BUG_ON(hole_start + count * BIT_ULL(size) > hole_end); + GEM_BUG_ON(count * BIT_ULL(aligned_size) > vm->total); + GEM_BUG_ON(hole_start + count * BIT_ULL(aligned_size) > hole_end); /* Ignore allocation failures (i.e. 
don't report them as * a test failure) as we are purposefully allocating very @@ -297,10 +300,10 @@ static int lowlevel_hole(struct i915_address_space *vm, } for (n = 0; n < count; n++) { - u64 addr = hole_start + order[n] * BIT_ULL(size); + u64 addr = hole_start + order[n] * BIT_ULL(aligned_size); intel_wakeref_t wakeref; - GEM_BUG_ON(addr + BIT_ULL(size) > vm->total); + GEM_BUG_ON(addr + BIT_ULL(aligned_size) > vm->total); if (igt_timeout(end_time, "%s timed out before %d/%d\n", @@ -343,7 +346,7 @@ static int lowlevel_hole(struct i915_address_space *vm, } mock_vma->pages = obj->mm.pages; - mock_vma->node.size = BIT_ULL(size); + mock_vma->node.size = BIT_ULL(aligned_size); mock_vma->node.start = addr; with_intel_runtime_pm(vm->gt->uncore->rpm, wakeref) @@ -354,7 +357,7 @@ static int lowlevel_hole(struct i915_address_space *vm, i915_random_reorder(order, count, &prng); for (n = 0; n < count; n++) { - u64 addr = hole_start + order[n] * BIT_ULL(size); + u64 addr = hole_start + order[n] * BIT_ULL(aligned_size); intel_wakeref_t wakeref; GEM_BUG_ON(addr + BIT_ULL(size) > vm->total); @@ -398,8 +401,10 @@ static int fill_hole(struct i915_address_space *vm, { const u64 hole_size = hole_end - hole_start; struct drm_i915_gem_object *obj; + const unsigned int min_alignment = + i915_vm_min_alignment(vm, INTEL_MEMORY_SYSTEM); const unsigned long max_pages = - min_t(u64, ULONG_MAX - 1, hole_size/2 >> PAGE_SHIFT); + min_t(u64, ULONG_MAX - 1, (hole_size / 2) >> ilog2(min_alignment)); const unsigned long max_step = max(int_sqrt(max_pages), 2UL); unsigned long npages, prime, flags; struct i915_vma *vma; @@ -440,14 +445,17 @@ static int fill_hole(struct i915_address_space *vm, offset = p->offset; list_for_each_entry(obj, &objects, st_link) { + u64 aligned_size = round_up(obj->base.size, + min_alignment); + vma = i915_vma_instance(obj, vm, NULL); if (IS_ERR(vma)) continue; if (p->step < 0) { - if (offset < hole_start + obj->base.size) + if (offset < hole_start + aligned_size) break; - offset -= obj->base.size; + offset -= aligned_size; } err = i915_vma_pin(vma, 0, 0, offset | flags); @@ -469,22 +477,25 @@ static int fill_hole(struct i915_address_space *vm, i915_vma_unpin(vma); if (p->step > 0) { - if (offset + obj->base.size > hole_end) + if (offset + aligned_size > hole_end) break; - offset += obj->base.size; + offset += aligned_size; } } offset = p->offset; list_for_each_entry(obj, &objects, st_link) { + u64 aligned_size = round_up(obj->base.size, + min_alignment); + vma = i915_vma_instance(obj, vm, NULL); if (IS_ERR(vma)) continue; if (p->step < 0) { - if (offset < hole_start + obj->base.size) + if (offset < hole_start + aligned_size) break; - offset -= obj->base.size; + offset -= aligned_size; } if (!drm_mm_node_allocated(&vma->node) || @@ -505,22 +516,25 @@ static int fill_hole(struct i915_address_space *vm, } if (p->step > 0) { - if (offset + obj->base.size > hole_end) + if (offset + aligned_size > hole_end) break; - offset += obj->base.size; + offset += aligned_size; } } offset = p->offset; list_for_each_entry_reverse(obj, &objects, st_link) { + u64 aligned_size = round_up(obj->base.size, + min_alignment); + vma = i915_vma_instance(obj, vm, NULL); if (IS_ERR(vma)) continue; if (p->step < 0) { - if (offset < hole_start + obj->base.size) + if (offset < hole_start + aligned_size) break; - offset -= obj->base.size; + offset -= aligned_size; } err = i915_vma_pin(vma, 0, 0, offset | flags); @@ -542,22 +556,25 @@ static int fill_hole(struct i915_address_space *vm, i915_vma_unpin(vma); if (p->step > 0) { - if 
(offset + obj->base.size > hole_end) + if (offset + aligned_size > hole_end) break; - offset += obj->base.size; + offset += aligned_size; } } offset = p->offset; list_for_each_entry_reverse(obj, &objects, st_link) { + u64 aligned_size = round_up(obj->base.size, + min_alignment); + vma = i915_vma_instance(obj, vm, NULL); if (IS_ERR(vma)) continue; if (p->step < 0) { - if (offset < hole_start + obj->base.size) + if (offset < hole_start + aligned_size) break; - offset -= obj->base.size; + offset -= aligned_size; } if (!drm_mm_node_allocated(&vma->node) || @@ -578,9 +595,9 @@ static int fill_hole(struct i915_address_space *vm, } if (p->step > 0) { - if (offset + obj->base.size > hole_end) + if (offset + aligned_size > hole_end) break; - offset += obj->base.size; + offset += aligned_size; } } } @@ -610,6 +627,7 @@ static int walk_hole(struct i915_address_space *vm, const u64 hole_size = hole_end - hole_start; const unsigned long max_pages = min_t(u64, ULONG_MAX - 1, hole_size >> PAGE_SHIFT); + unsigned long min_alignment; unsigned long flags; u64 size; @@ -619,6 +637,8 @@ static int walk_hole(struct i915_address_space *vm, if (i915_is_ggtt(vm)) flags |= PIN_GLOBAL; + min_alignment = i915_vm_min_alignment(vm, INTEL_MEMORY_SYSTEM); + for_each_prime_number_from(size, 1, max_pages) { struct drm_i915_gem_object *obj; struct i915_vma *vma; @@ -637,7 +657,7 @@ static int walk_hole(struct i915_address_space *vm, for (addr = hole_start; addr + obj->base.size < hole_end; - addr += obj->base.size) { + addr += round_up(obj->base.size, min_alignment)) { err = i915_vma_pin(vma, 0, 0, addr | flags); if (err) { pr_err("%s bind failed at %llx + %llx [hole %llx- %llx] with err=%d\n", @@ -689,6 +709,7 @@ static int pot_hole(struct i915_address_space *vm, { struct drm_i915_gem_object *obj; struct i915_vma *vma; + unsigned int min_alignment; unsigned long flags; unsigned int pot; int err = 0; @@ -697,6 +718,8 @@ static int pot_hole(struct i915_address_space *vm, if (i915_is_ggtt(vm)) flags |= PIN_GLOBAL; + min_alignment = i915_vm_min_alignment(vm, INTEL_MEMORY_SYSTEM); + obj = i915_gem_object_create_internal(vm->i915, 2 * I915_GTT_PAGE_SIZE); if (IS_ERR(obj)) return PTR_ERR(obj); @@ -709,13 +732,13 @@ static int pot_hole(struct i915_address_space *vm, /* Insert a pair of pages across every pot boundary within the hole */ for (pot = fls64(hole_end - 1) - 1; - pot > ilog2(2 * I915_GTT_PAGE_SIZE); + pot > ilog2(2 * min_alignment); pot--) { u64 step = BIT_ULL(pot); u64 addr; - for (addr = round_up(hole_start + I915_GTT_PAGE_SIZE, step) - I915_GTT_PAGE_SIZE; - addr <= round_down(hole_end - 2*I915_GTT_PAGE_SIZE, step) - I915_GTT_PAGE_SIZE; + for (addr = round_up(hole_start + min_alignment, step) - min_alignment; + addr <= round_down(hole_end - (2 * min_alignment), step) - min_alignment; addr += step) { err = i915_vma_pin(vma, 0, 0, addr | flags); if (err) { @@ -760,6 +783,7 @@ static int drunk_hole(struct i915_address_space *vm, unsigned long end_time) { I915_RND_STATE(prng); + unsigned int min_alignment; unsigned int size; unsigned long flags; @@ -767,15 +791,18 @@ static int drunk_hole(struct i915_address_space *vm, if (i915_is_ggtt(vm)) flags |= PIN_GLOBAL; + min_alignment = i915_vm_min_alignment(vm, INTEL_MEMORY_SYSTEM); + /* Keep creating larger objects until one cannot fit into the hole */ for (size = 12; (hole_end - hole_start) >> size; size++) { struct drm_i915_gem_object *obj; unsigned int *order, count, n; struct i915_vma *vma; - u64 hole_size; + u64 hole_size, aligned_size; int err = -ENODEV; - hole_size = 
(hole_end - hole_start) >> size; + aligned_size = max_t(u32, ilog2(min_alignment), size); + hole_size = (hole_end - hole_start) >> aligned_size; if (hole_size > KMALLOC_MAX_SIZE / sizeof(u32)) hole_size = KMALLOC_MAX_SIZE / sizeof(u32); count = hole_size >> 1; @@ -815,7 +842,7 @@ static int drunk_hole(struct i915_address_space *vm, GEM_BUG_ON(vma->size != BIT_ULL(size)); for (n = 0; n < count; n++) { - u64 addr = hole_start + order[n] * BIT_ULL(size); + u64 addr = hole_start + order[n] * BIT_ULL(aligned_size); err = i915_vma_pin(vma, 0, 0, addr | flags); if (err) { @@ -867,11 +894,14 @@ static int __shrink_hole(struct i915_address_space *vm, { struct drm_i915_gem_object *obj; unsigned long flags = PIN_OFFSET_FIXED | PIN_USER; + unsigned int min_alignment; unsigned int order = 12; LIST_HEAD(objects); int err = 0; u64 addr; + min_alignment = i915_vm_min_alignment(vm, INTEL_MEMORY_SYSTEM); + /* Keep creating larger objects until one cannot fit into the hole */ for (addr = hole_start; addr < hole_end; ) { struct i915_vma *vma; @@ -912,7 +942,7 @@ static int __shrink_hole(struct i915_address_space *vm, } i915_vma_unpin(vma); - addr += size; + addr += round_up(size, min_alignment); /* * Since we are injecting allocation faults at random intervals, From patchwork Thu Dec 9 15:45:19 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ramalingam C X-Patchwork-Id: 12667623 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 424B1C433EF for ; Thu, 9 Dec 2021 17:01:09 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 5A2BF10E609; Thu, 9 Dec 2021 16:54:39 +0000 (UTC) Received: from mga06.intel.com (mga06.intel.com [134.134.136.31]) by gabe.freedesktop.org (Postfix) with ESMTPS id 8573410E120; Thu, 9 Dec 2021 15:46:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1639064766; x=1670600766; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=DrNXqwGgAO+1lUJRT2MqybX+A18nxjM8ixh5l/KKQik=; b=grI60tGN7j8/2VPH7PEsvtoKaOIeTEsgqF/jsU3HeqyrO5uiIrNPAcsg G2/DQPHn3xHCUzEtdnaLjmCsZUtMct0A+t30488Hzu+QvPqMWyJiGQbkg C7tntXJ1WC+cpjOWzXB6Fv/Ow3moaGL3N/qi8yi6j33bXl3UXs15oQ78F Ho54NLPITRgZUsl40eolyqifFOeTPwQVuh1VBfnW4C0C/ixt7g24PoGYy glAZmcFhEsEsp15eCthz7IStZcIy7ix5IPnXupiBRznZwI9X4YPAvv62N RzObdZ15NlKZJAI8YAPFGYVwNFOjg9Tzt/6QHbvNT4I8jMMu4bxUP18X9 Q==; X-IronPort-AV: E=McAfee;i="6200,9189,10192"; a="298916822" X-IronPort-AV: E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="298916822" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:45:58 -0800 X-IronPort-AV: E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="503535129" Received: from ramaling-i9x.iind.intel.com ([10.99.66.205]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:45:55 -0800 From: Ramalingam C To: dri-devel , intel-gfx Date: Thu, 9 Dec 2021 21:15:19 +0530 Message-Id: <20211209154533.4084-3-ramalingam.c@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: 
<20211209154533.4084-1-ramalingam.c@intel.com> References: <20211209154533.4084-1-ramalingam.c@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 02/16] drm/i915/xehpsdv: support 64K GTT pages X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Hellstrom Thomas , Matthew Auld Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" From: Matthew Auld XEHPSDV optimises 64K GTT pages for local-memory, since everything should be allocated at 64K granularity. We say goodbye to sparse entries, and instead get a compact 256B page-table for 64K pages, which should be more cache friendly. 4K pages for local-memory are no longer supported by the HW. Signed-off-by: Matthew Auld Signed-off-by: Stuart Summers Signed-off-by: Ramalingam C Cc: Joonas Lahtinen Cc: Rodrigo Vivi --- .../gpu/drm/i915/gem/selftests/huge_pages.c | 60 ++++++++++ drivers/gpu/drm/i915/gt/gen8_ppgtt.c | 109 +++++++++++++++++- drivers/gpu/drm/i915/gt/intel_gtt.h | 3 + drivers/gpu/drm/i915/gt/intel_ppgtt.c | 1 + 4 files changed, 170 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c index c69c7d45aabc..bd8dc1a28022 100644 --- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c +++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c @@ -1483,6 +1483,65 @@ static int igt_ppgtt_sanity_check(void *arg) return err; } +static int igt_ppgtt_compact(void *arg) +{ + struct drm_i915_private *i915 = arg; + struct drm_i915_gem_object *obj; + int err; + + /* + * Simple test to catch issues with compact 64K pages -- since the pt is + * compacted to 256B that gives us 32 entries per pt, however since the + * backing page for the pt is 4K, any extra entries we might incorrectly + * write out should be ignored by the HW. If ever hit such a case this + * test should catch it since some of our writes would land in scratch. + */ + + if (!HAS_64K_PAGES(i915)) { + pr_info("device lacks compact 64K page support, skipping\n"); + return 0; + } + + if (!HAS_LMEM(i915)) { + pr_info("device lacks LMEM support, skipping\n"); + return 0; + } + + /* We want the range to cover multiple page-table boundaries. */ + obj = i915_gem_object_create_lmem(i915, SZ_4M, 0); + if (IS_ERR(obj)) + return err; + + err = i915_gem_object_pin_pages_unlocked(obj); + if (err) + goto out_put; + + if (obj->mm.page_sizes.phys < I915_GTT_PAGE_SIZE_64K) { + pr_info("LMEM compact unable to allocate huge-page(s)\n"); + goto out_unpin; + } + + /* + * Disable 2M GTT pages by forcing the page-size to 64K for the GTT + * insertion. 
+ */ + obj->mm.page_sizes.sg = I915_GTT_PAGE_SIZE_64K; + + err = igt_write_huge(i915, obj); + if (err) + pr_err("LMEM compact write-huge failed\n"); + +out_unpin: + i915_gem_object_unpin_pages(obj); +out_put: + i915_gem_object_put(obj); + + if (err == -ENOMEM) + err = 0; + + return err; +} + static int igt_tmpfs_fallback(void *arg) { struct drm_i915_private *i915 = arg; @@ -1740,6 +1799,7 @@ int i915_gem_huge_page_live_selftests(struct drm_i915_private *i915) SUBTEST(igt_tmpfs_fallback), SUBTEST(igt_ppgtt_smoke_huge), SUBTEST(igt_ppgtt_sanity_check), + SUBTEST(igt_ppgtt_compact), }; if (!HAS_PPGTT(i915)) { diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c index b012c50f7ce7..8d081497e87e 100644 --- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c +++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c @@ -233,6 +233,8 @@ static u64 __gen8_ppgtt_clear(struct i915_address_space * const vm, start, end, lvl); } else { unsigned int count; + unsigned int pte = gen8_pd_index(start, 0); + unsigned int num_ptes; u64 *vaddr; count = gen8_pt_count(start, end); @@ -242,10 +244,18 @@ static u64 __gen8_ppgtt_clear(struct i915_address_space * const vm, atomic_read(&pt->used)); GEM_BUG_ON(!count || count >= atomic_read(&pt->used)); + num_ptes = count; + if (pt->is_compact) { + GEM_BUG_ON(num_ptes % 16); + GEM_BUG_ON(pte % 16); + num_ptes /= 16; + pte /= 16; + } + vaddr = px_vaddr(pt); - memset64(vaddr + gen8_pd_index(start, 0), + memset64(vaddr + pte, vm->scratch[0]->encode, - count); + num_ptes); atomic_sub(count, &pt->used); start += count; @@ -453,6 +463,96 @@ gen8_ppgtt_insert_pte(struct i915_ppgtt *ppgtt, return idx; } +static void +xehpsdv_ppgtt_insert_huge(struct i915_vma *vma, + struct sgt_dma *iter, + enum i915_cache_level cache_level, + u32 flags) +{ + const gen8_pte_t pte_encode = vma->vm->pte_encode(0, cache_level, flags); + unsigned int rem = sg_dma_len(iter->sg); + u64 start = vma->node.start; + + GEM_BUG_ON(!i915_vm_is_4lvl(vma->vm)); + + do { + struct i915_page_directory * const pdp = + gen8_pdp_for_page_address(vma->vm, start); + struct i915_page_directory * const pd = + i915_pd_entry(pdp, __gen8_pte_index(start, 2)); + struct i915_page_table *pt = + i915_pt_entry(pd, __gen8_pte_index(start, 1)); + gen8_pte_t encode = pte_encode; + unsigned int page_size; + gen8_pte_t *vaddr; + u16 index, max; + + max = I915_PDES; + + if (vma->page_sizes.sg & I915_GTT_PAGE_SIZE_2M && + IS_ALIGNED(iter->dma, I915_GTT_PAGE_SIZE_2M) && + rem >= I915_GTT_PAGE_SIZE_2M && + !__gen8_pte_index(start, 0)) { + index = __gen8_pte_index(start, 1); + encode |= GEN8_PDE_PS_2M; + page_size = I915_GTT_PAGE_SIZE_2M; + + vaddr = px_vaddr(pd); + } else { + if (encode & GEN12_PPGTT_PTE_LM) { + GEM_BUG_ON(!i915_gem_object_is_lmem(vma->obj)); + GEM_BUG_ON(__gen8_pte_index(start, 0) % 16); + GEM_BUG_ON(rem < I915_GTT_PAGE_SIZE_64K); + GEM_BUG_ON(!IS_ALIGNED(iter->dma, + I915_GTT_PAGE_SIZE_64K)); + + index = __gen8_pte_index(start, 0) / 16; + page_size = I915_GTT_PAGE_SIZE_64K; + + max /= 16; + + vaddr = px_vaddr(pd); + vaddr[__gen8_pte_index(start, 1)] |= GEN12_PDE_64K; + + pt->is_compact = true; + } else { + GEM_BUG_ON(i915_gem_object_is_lmem(vma->obj)); + GEM_BUG_ON(pt->is_compact); + index = __gen8_pte_index(start, 0); + page_size = I915_GTT_PAGE_SIZE; + } + + vaddr = px_vaddr(pt); + } + + do { + GEM_BUG_ON(rem < page_size); + vaddr[index++] = encode | iter->dma; + + start += page_size; + iter->dma += page_size; + rem -= page_size; + if (iter->dma >= iter->max) { + iter->sg = __sg_next(iter->sg); + if 
(!iter->sg) + break; + + rem = sg_dma_len(iter->sg); + if (!rem) + break; + + iter->dma = sg_dma_address(iter->sg); + iter->max = iter->dma + rem; + + if (unlikely(!IS_ALIGNED(iter->dma, page_size))) + break; + } + } while (rem >= page_size && index < max); + + vma->page_sizes.gtt |= page_size; + } while (iter->sg && sg_dma_len(iter->sg)); +} + static void gen8_ppgtt_insert_huge(struct i915_vma *vma, struct sgt_dma *iter, enum i915_cache_level cache_level, @@ -585,7 +685,10 @@ static void gen8_ppgtt_insert(struct i915_address_space *vm, struct sgt_dma iter = sgt_dma(vma); if (vma->page_sizes.sg > I915_GTT_PAGE_SIZE) { - gen8_ppgtt_insert_huge(vma, &iter, cache_level, flags); + if (HAS_64K_PAGES(vm->i915)) + xehpsdv_ppgtt_insert_huge(vma, &iter, cache_level, flags); + else + gen8_ppgtt_insert_huge(vma, &iter, cache_level, flags); } else { u64 idx = vma->node.start >> GEN8_PTE_SHIFT; diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.h b/drivers/gpu/drm/i915/gt/intel_gtt.h index ff3867e69720..85ff11ebcbd5 100644 --- a/drivers/gpu/drm/i915/gt/intel_gtt.h +++ b/drivers/gpu/drm/i915/gt/intel_gtt.h @@ -91,6 +91,8 @@ typedef u64 gen8_pte_t; #define GEN12_GGTT_PTE_LM BIT_ULL(1) +#define GEN12_PDE_64K BIT(6) + /* * Cacheability Control is a 4-bit value. The low three bits are stored in bits * 3:1 of the PTE, while the fourth bit is stored in bit 11 of the PTE. @@ -159,6 +161,7 @@ struct i915_page_table { atomic_t used; struct i915_page_table *stash; }; + bool is_compact; }; struct i915_page_directory { diff --git a/drivers/gpu/drm/i915/gt/intel_ppgtt.c b/drivers/gpu/drm/i915/gt/intel_ppgtt.c index 4396bfd630d8..b8238f5bc8b1 100644 --- a/drivers/gpu/drm/i915/gt/intel_ppgtt.c +++ b/drivers/gpu/drm/i915/gt/intel_ppgtt.c @@ -26,6 +26,7 @@ struct i915_page_table *alloc_pt(struct i915_address_space *vm) return ERR_PTR(-ENOMEM); } + pt->is_compact = false; atomic_set(&pt->used, 0); return pt; } From patchwork Thu Dec 9 15:45:20 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ramalingam C X-Patchwork-Id: 12667659 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id DF932C433EF for ; Thu, 9 Dec 2021 17:02:56 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 4AE6510E135; Thu, 9 Dec 2021 16:55:33 +0000 (UTC) Received: from mga06.intel.com (mga06.intel.com [134.134.136.31]) by gabe.freedesktop.org (Postfix) with ESMTPS id CE4CC10E12C; Thu, 9 Dec 2021 15:46:15 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1639064775; x=1670600775; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=U8u7aSqNekGcRaNoxEaFavdosyfV0fdTbIqhhb/iWjs=; b=C4ap/wyf//ygwTS3LCL37nUyDB/IqhVsumRX4g7GriiGJ7rDausSPdpT 5xdZ9THhBLYjdJgkMU4DLzPC6h0xjO5g5dN++L7VgyMD88Xlfxnn3DeSt wpal3Ni9ON/LcQncWp7kyICrwfiQVc0AMoTyAYgJJ/8iy54RVG/wh2+7I xjt0x6pKhBcJZIdcPeKjeXvtmJ1XKLFfXOx1vEuem9fTYURqz02mXgWzL OKB/BwTvUKhC91vOnlJVR9iGHq696U5xrbTobOEV1x12mNYHQtb+HsZVw +AlyNkks4l0ddPREfvIh+mGg0xsxHbZeC8YMWUMAauiza++xkPQghu4JW w==; X-IronPort-AV: E=McAfee;i="6200,9189,10192"; a="298916877" X-IronPort-AV: 
E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="298916877" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:46:00 -0800 X-IronPort-AV: E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="503535151" Received: from ramaling-i9x.iind.intel.com ([10.99.66.205]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:45:58 -0800 From: Ramalingam C To: dri-devel , intel-gfx Date: Thu, 9 Dec 2021 21:15:20 +0530 Message-Id: <20211209154533.4084-4-ramalingam.c@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20211209154533.4084-1-ramalingam.c@intel.com> References: <20211209154533.4084-1-ramalingam.c@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 03/16] drm/i915/xehpsdv: implement memory coloring X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Hellstrom Thomas , Matthew Auld Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" From: Matthew Auld The basic idea is that each 2M block(page-table) has a color, depending on if the page-table is occupied by LMEM objects(64K) or SMEM objects(4K), where our goal is to prevent mixing 64K and 4K GTT pages in the page-table, which is not supported by the HW. Signed-off-by: Matthew Auld Signed-off-by: Stuart Summers Signed-off-by: Ramalingam C Cc: Joonas Lahtinen Cc: Rodrigo Vivi --- drivers/gpu/drm/i915/gt/gen8_ppgtt.c | 16 ++++++++++ drivers/gpu/drm/i915/gt/intel_gtt.h | 6 ++++ drivers/gpu/drm/i915/i915_gem_evict.c | 17 ++++++++++ drivers/gpu/drm/i915/i915_vma.c | 46 +++++++++++++++++++-------- 4 files changed, 71 insertions(+), 14 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c index 8d081497e87e..5db11d8f7c7a 100644 --- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c +++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c @@ -463,6 +463,19 @@ gen8_ppgtt_insert_pte(struct i915_ppgtt *ppgtt, return idx; } +static void xehpsdv_ppgtt_color_adjust(const struct drm_mm_node *node, + unsigned long color, + u64 *start, + u64 *end) +{ + if (i915_node_color_differs(node, color)) + *start = round_up(*start, SZ_2M); + + node = list_next_entry(node, node_list); + if (i915_node_color_differs(node, color)) + *end = round_down(*end, SZ_2M); +} + static void xehpsdv_ppgtt_insert_huge(struct i915_vma *vma, struct sgt_dma *iter, @@ -903,6 +916,9 @@ struct i915_ppgtt *gen8_ppgtt_create(struct intel_gt *gt, ppgtt->vm.alloc_scratch_dma = alloc_pt_dma; } + if (HAS_64K_PAGES(gt->i915)) + ppgtt->vm.mm.color_adjust = xehpsdv_ppgtt_color_adjust; + err = gen8_init_scratch(&ppgtt->vm); if (err) goto err_free; diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.h b/drivers/gpu/drm/i915/gt/intel_gtt.h index 85ff11ebcbd5..01e9a98846fb 100644 --- a/drivers/gpu/drm/i915/gt/intel_gtt.h +++ b/drivers/gpu/drm/i915/gt/intel_gtt.h @@ -400,6 +400,12 @@ i915_vm_has_cache_coloring(struct i915_address_space *vm) return i915_is_ggtt(vm) && vm->mm.color_adjust; } +static inline bool +i915_vm_has_memory_coloring(struct i915_address_space *vm) +{ + return !i915_is_ggtt(vm) && vm->mm.color_adjust; +} + static inline struct i915_ggtt * i915_vm_to_ggtt(struct i915_address_space *vm) { diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c index 2b73ddb11c66..006bf4924c24 100644 --- 
a/drivers/gpu/drm/i915/i915_gem_evict.c +++ b/drivers/gpu/drm/i915/i915_gem_evict.c @@ -292,6 +292,13 @@ int i915_gem_evict_for_node(struct i915_address_space *vm, /* Always look at the page afterwards to avoid the end-of-GTT */ end += I915_GTT_PAGE_SIZE; + } else if (i915_vm_has_memory_coloring(vm)) { + /* + * Expand the search the cover the page-table boundries, in + * case we need to flip the color of the page-table(s). + */ + start = round_down(start, SZ_2M); + end = round_up(end, SZ_2M); } GEM_BUG_ON(start >= end); @@ -321,6 +328,16 @@ int i915_gem_evict_for_node(struct i915_address_space *vm, if (node->color == target->color) continue; } + } else if (i915_vm_has_memory_coloring(vm)) { + if (node->start + node->size <= target->start) { + if (node->color == target->color) + continue; + } + + if (node->start >= target->start + target->size) { + if (node->color == target->color) + continue; + } } if (i915_vma_is_pinned(vma)) { diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c index 73972bf4052b..05719648580f 100644 --- a/drivers/gpu/drm/i915/i915_vma.c +++ b/drivers/gpu/drm/i915/i915_vma.c @@ -613,6 +613,10 @@ bool i915_gem_valid_gtt_space(struct i915_vma *vma, unsigned long color) struct drm_mm_node *node = &vma->node; struct drm_mm_node *other; + /* Only valid to be called on an already inserted vma */ + GEM_BUG_ON(!drm_mm_node_allocated(node)); + GEM_BUG_ON(list_empty(&node->node_list)); + /* * On some machines we have to be careful when putting differing types * of snoopable memory together to avoid the prefetcher crossing memory @@ -620,22 +624,34 @@ bool i915_gem_valid_gtt_space(struct i915_vma *vma, unsigned long color) * these constraints apply and set the drm_mm.color_adjust * appropriately. */ - if (!i915_vm_has_cache_coloring(vma->vm)) - return true; - - /* Only valid to be called on an already inserted vma */ - GEM_BUG_ON(!drm_mm_node_allocated(node)); - GEM_BUG_ON(list_empty(&node->node_list)); + if (i915_vm_has_cache_coloring(vma->vm)) { + other = list_prev_entry(node, node_list); + if (i915_node_color_differs(other, color) && + !drm_mm_hole_follows(other)) + return false; - other = list_prev_entry(node, node_list); - if (i915_node_color_differs(other, color) && - !drm_mm_hole_follows(other)) - return false; + other = list_next_entry(node, node_list); + if (i915_node_color_differs(other, color) && + !drm_mm_hole_follows(node)) + return false; + /* + * On XEHPSDV we need to make sure we are not mixing LMEM and SMEM objects + * in the same page-table, i.e mixing 64K and 4K gtt pages in the same + * page-table. 
+ */ + } else if (i915_vm_has_memory_coloring(vma->vm)) { + other = list_prev_entry(node, node_list); + if (i915_node_color_differs(other, color) && + !drm_mm_hole_follows(other) && + !IS_ALIGNED(other->start + other->size, SZ_2M)) + return false; - other = list_next_entry(node, node_list); - if (i915_node_color_differs(other, color) && - !drm_mm_hole_follows(node)) - return false; + other = list_next_entry(node, node_list); + if (i915_node_color_differs(other, color) && + !drm_mm_hole_follows(node) && + !IS_ALIGNED(other->start, SZ_2M)) + return false; + } return true; } @@ -704,6 +720,8 @@ i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags) if (i915_vm_has_cache_coloring(vma->vm)) color = vma->obj->cache_level; + else if (i915_vm_has_memory_coloring(vma->vm)) + color = i915_gem_object_is_lmem(vma->obj); if (flags & PIN_OFFSET_FIXED) { u64 offset = flags & PIN_OFFSET_MASK; From patchwork Thu Dec 9 15:45:21 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ramalingam C X-Patchwork-Id: 12667669 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 167BAC4332F for ; Thu, 9 Dec 2021 17:03:19 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 40AA310F68D; Thu, 9 Dec 2021 16:55:42 +0000 (UTC) Received: from mga06.intel.com (mga06.intel.com [134.134.136.31]) by gabe.freedesktop.org (Postfix) with ESMTPS id C697B10E124; Thu, 9 Dec 2021 15:46:16 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1639064777; x=1670600777; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=HVP39cYirHKNi3ThSXeTN7iysw6js4+X6/Z7AItPA+A=; b=k6AK5n0mXfDOV0Rfk9yey8fuIfr920VbeW4gHFG2nXn2RT4sfxVctZ68 4QdebBLUG2zLIqM4XyI3ZLV9wsSVX5szBoII+eKyL9gSP239JseMNtqn5 qO2fpJxvM87g1jC1Jai1WOpqEND2Dr85JKq0TM+t9oaFOfT8Y0agBmJlF r1Trr1nN8efXZCa6cKc5JpimPXSXaDKyOpuETdazX4Y1jz7GZMi4GEt9u JNsNuhF4C7qmSWpoE6u7wFx+lAkJakC+Bon7Ja14QvlrS+Jq8/3KqS8Lh oncrCpaos3TsZvZkmN820GIjV1u1f80EntDigcWO0P1zgDkko9epO8/K9 w==; X-IronPort-AV: E=McAfee;i="6200,9189,10192"; a="298916894" X-IronPort-AV: E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="298916894" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:46:03 -0800 X-IronPort-AV: E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="503535172" Received: from ramaling-i9x.iind.intel.com ([10.99.66.205]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:46:01 -0800 From: Ramalingam C To: dri-devel , intel-gfx Date: Thu, 9 Dec 2021 21:15:21 +0530 Message-Id: <20211209154533.4084-5-ramalingam.c@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20211209154533.4084-1-ramalingam.c@intel.com> References: <20211209154533.4084-1-ramalingam.c@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 04/16] drm/i915/xehpsdv: Add has_flat_ccs to device info X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , 
List-Archive: List-Post: List-Help: List-Subscribe: , Cc: CQ Tang , Hellstrom Thomas , Matthew Auld Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" From: CQ Tang Platforms of XeHP and beyond support 3D surface (buffer) compression and various compression formats. This is accomplished by an additional compression control state (CCS) stored for each surface. Gen 12 devices(TGL family and DG1) stores compression states in a separate region of memory. It is managed by user-space and has an associated set of user-space managed page tables used by hardware for address translation. In Xe HP and beyond (XEHPSDV, DG2, etc), there is a new feature introduced i.e Flat CCS. It replaced AUX page tables with a flat indexed region of device memory for storing compression states. Cc: Joonas Lahtinen Cc: Matthew Auld Signed-off-by: CQ Tang Signed-off-by: Ramalingam C Reviewed-by: Matthew Auld --- drivers/gpu/drm/i915/i915_drv.h | 2 ++ drivers/gpu/drm/i915/i915_pci.c | 1 + drivers/gpu/drm/i915/intel_device_info.h | 1 + 3 files changed, 4 insertions(+) diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index aeafce112dcd..ad2dd18f7622 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -1543,6 +1543,8 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915, #define HAS_REGION(i915, i) (INTEL_INFO(i915)->memory_regions & (i)) #define HAS_LMEM(i915) HAS_REGION(i915, REGION_LMEM) +#define HAS_FLAT_CCS(dev_priv) (INTEL_INFO(dev_priv)->has_flat_ccs) + #define HAS_GT_UC(dev_priv) (INTEL_INFO(dev_priv)->has_gt_uc) #define HAS_POOLED_EU(dev_priv) (INTEL_INFO(dev_priv)->has_pooled_eu) diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c index b523eb1ece5d..382e7278058a 100644 --- a/drivers/gpu/drm/i915/i915_pci.c +++ b/drivers/gpu/drm/i915/i915_pci.c @@ -1005,6 +1005,7 @@ static const struct intel_device_info adl_p_info = { XE_HP_PAGE_SIZES, \ .dma_mask_size = 46, \ .has_64bit_reloc = 1, \ + .has_flat_ccs = 1, \ .has_global_mocs = 1, \ .has_gt_uc = 1, \ .has_llc = 1, \ diff --git a/drivers/gpu/drm/i915/intel_device_info.h b/drivers/gpu/drm/i915/intel_device_info.h index 213ae2c07126..cbbb40e8451f 100644 --- a/drivers/gpu/drm/i915/intel_device_info.h +++ b/drivers/gpu/drm/i915/intel_device_info.h @@ -129,6 +129,7 @@ enum intel_ppgtt_type { func(has_64k_pages); \ func(gpu_reset_clobbers_display); \ func(has_reset_engine); \ + func(has_flat_ccs); \ func(has_global_mocs); \ func(has_gt_uc); \ func(has_l3_dpf); \ From patchwork Thu Dec 9 15:45:22 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Ramalingam C X-Patchwork-Id: 12667671 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 00C01C433F5 for ; Thu, 9 Dec 2021 17:03:27 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id CBB9710F73D; Thu, 9 Dec 2021 16:55:44 +0000 (UTC) Received: from mga06.intel.com (mga06.intel.com [134.134.136.31]) by gabe.freedesktop.org (Postfix) with ESMTPS id A733D10E117; Thu, 9 Dec 2021 15:46:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; 
t=1639064783; x=1670600783; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=8rz3Ej5M/1T08ZmtKsi3p6GnsKx9XPO94uEWkVR6qvc=; b=TPUSZtIumHIe/DsALpdycVwQgH7NrLYjZEIrJhFGwScVGuV8Wt4qCWNf vujpGhWfAFffAwi/GB81EzjIpoAK0K3QDkQl063o6+FqYwZa5HEBRdV4I kthaKlVxkr6KoiA2BzxUCiBu9hYwiSQeSTs5hSONspGV9Vxw/Pjd0tBom HjPIcdqhh++/Y236Y4f3Nqc3U4yVZQ0/+tGZcBStrE7i3fJiHMSwGXZS0 wIoMXGBlRsvp+idNqxeeSbk4V9MLiVG6X/5t0tzYVqOJWZThmnCocwu3T +GHJvQ3cSgHmJ3LhoPh8fWI5koTph1YZOja4SSevcxOj31Kz+JrqjpOaF w==; X-IronPort-AV: E=McAfee;i="6200,9189,10192"; a="298916920" X-IronPort-AV: E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="298916920" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:46:06 -0800 X-IronPort-AV: E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="503535182" Received: from ramaling-i9x.iind.intel.com ([10.99.66.205]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:46:03 -0800 From: Ramalingam C To: dri-devel , intel-gfx Date: Thu, 9 Dec 2021 21:15:22 +0530 Message-Id: <20211209154533.4084-6-ramalingam.c@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20211209154533.4084-1-ramalingam.c@intel.com> References: <20211209154533.4084-1-ramalingam.c@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 05/16] drm/i915/lmem: Enable lmem for platforms with Flat CCS X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Abdiel Janulgue , Hellstrom Thomas , Matthew Auld Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" From: Abdiel Janulgue A portion of device memory is reserved for Flat CCS so usable device memory will be reduced by size of Flat CCS. Size of Flat CCS is specified in “XEHPSDV_FLAT_CCS_BASE_ADDR”. So to get effective device memory we need to subtract total device memory by Flat CCS memory size. 
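
[Editor's illustrative aside: in numbers, the usable local-memory size ends up being everything below the Flat CCS base, because the range from FLAT_CCS_BASE_ADDR to the end of the BAR is reserved for compression metadata. The stand-alone C sketch below mirrors the arithmetic in setup_lmem() from the diff; the BAR size and register value are made-up example numbers, not real hardware values.]

#include <stdio.h>
#include <stdint.h>

#define SZ_64K 0x10000ULL
#define XEHPSDV_CCS_BASE_SHIFT 8  /* shift used to decode the base register, per the diff */

int main(void)
{
	/* Example values only: a 16 GiB BAR and an assumed register reading. */
	uint64_t lmem_size = 16ULL << 30;                       /* pci_resource_len(pdev, 2) */
	uint64_t flat_ccs_base_addr_reg = 0x3FC00ULL << XEHPSDV_CCS_BASE_SHIFT;

	uint64_t flat_ccs_base = (flat_ccs_base_addr_reg >> XEHPSDV_CCS_BASE_SHIFT) * SZ_64K;
	uint64_t tile_stolen = lmem_size - flat_ccs_base;       /* reserved for CCS */

	printf("usable lmem: %llu bytes (%llu bytes reserved for Flat CCS)\n",
	       (unsigned long long)(lmem_size - tile_stolen),
	       (unsigned long long)tile_stolen);
	return 0;
}
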
Cc: Matthew Auld Signed-off-by: Abdiel Janulgue Signed-off-by: Ramalingam C --- drivers/gpu/drm/i915/gt/intel_gt.c | 19 ++++++++++++++++++ drivers/gpu/drm/i915/gt/intel_gt.h | 1 + drivers/gpu/drm/i915/gt/intel_region_lmem.c | 22 +++++++++++++++++++-- drivers/gpu/drm/i915/i915_reg.h | 3 +++ 4 files changed, 43 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c b/drivers/gpu/drm/i915/gt/intel_gt.c index f2422d48be32..510cda6a163f 100644 --- a/drivers/gpu/drm/i915/gt/intel_gt.c +++ b/drivers/gpu/drm/i915/gt/intel_gt.c @@ -902,6 +902,25 @@ u32 intel_gt_read_register_fw(struct intel_gt *gt, i915_reg_t reg) return intel_uncore_read_fw(gt->uncore, reg); } +u32 intel_gt_read_register(struct intel_gt *gt, i915_reg_t reg) +{ + int type; + u8 sliceid, subsliceid; + + for (type = 0; type < NUM_STEERING_TYPES; type++) { + if (intel_gt_reg_needs_read_steering(gt, reg, type)) { + intel_gt_get_valid_steering(gt, type, &sliceid, + &subsliceid); + return intel_uncore_read_with_mcr_steering(gt->uncore, + reg, + sliceid, + subsliceid); + } + } + + return intel_uncore_read(gt->uncore, reg); +} + void intel_gt_info_print(const struct intel_gt_info *info, struct drm_printer *p) { diff --git a/drivers/gpu/drm/i915/gt/intel_gt.h b/drivers/gpu/drm/i915/gt/intel_gt.h index 74e771871a9b..24b78398a587 100644 --- a/drivers/gpu/drm/i915/gt/intel_gt.h +++ b/drivers/gpu/drm/i915/gt/intel_gt.h @@ -84,6 +84,7 @@ static inline bool intel_gt_needs_read_steering(struct intel_gt *gt, } u32 intel_gt_read_register_fw(struct intel_gt *gt, i915_reg_t reg); +u32 intel_gt_read_register(struct intel_gt *gt, i915_reg_t reg); void intel_gt_info_print(const struct intel_gt_info *info, struct drm_printer *p); diff --git a/drivers/gpu/drm/i915/gt/intel_region_lmem.c b/drivers/gpu/drm/i915/gt/intel_region_lmem.c index fde2dcb59809..a358fa14372b 100644 --- a/drivers/gpu/drm/i915/gt/intel_region_lmem.c +++ b/drivers/gpu/drm/i915/gt/intel_region_lmem.c @@ -205,8 +205,26 @@ static struct intel_memory_region *setup_lmem(struct intel_gt *gt) if (!IS_DGFX(i915)) return ERR_PTR(-ENODEV); - /* Stolen starts from GSMBASE on DG1 */ - lmem_size = intel_uncore_read64(uncore, GEN12_GSMBASE); + if (HAS_FLAT_CCS(i915)) { + u64 tile_stolen, flat_ccs_base_addr_reg, flat_ccs_base; + + lmem_size = pci_resource_len(pdev, 2); + flat_ccs_base_addr_reg = intel_gt_read_register(gt, XEHPSDV_FLAT_CCS_BASE_ADDR); + flat_ccs_base = (flat_ccs_base_addr_reg >> XEHPSDV_CCS_BASE_SHIFT) * SZ_64K; + tile_stolen = lmem_size - flat_ccs_base; + + /* If the FLAT_CCS_BASE_ADDR register is not populated, flag an error */ + if (tile_stolen == lmem_size) + DRM_ERROR("CCS_BASE_ADDR register did not have expected value\n"); + + lmem_size -= tile_stolen; + } else { + /* Stolen starts from GSMBASE without CCS */ + lmem_size = intel_uncore_read64(&i915->uncore, GEN12_GSMBASE); + if (GEM_WARN_ON(lmem_size > pci_resource_len(pdev, 2))) + return ERR_PTR(-ENODEV); + } + io_start = pci_resource_start(pdev, 2); if (GEM_WARN_ON(lmem_size > pci_resource_len(pdev, 2))) diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h index d27ba273cc68..29f1cafb0f4b 100644 --- a/drivers/gpu/drm/i915/i915_reg.h +++ b/drivers/gpu/drm/i915/i915_reg.h @@ -12620,6 +12620,9 @@ enum skl_power_gate { #define SGGI_DIS REG_BIT(15) #define SGR_DIS REG_BIT(13) +#define XEHPSDV_FLAT_CCS_BASE_ADDR _MMIO(0x4910) +#define XEHPSDV_CCS_BASE_SHIFT 8 + /* gamt regs */ #define GEN8_L3_LRA_1_GPGPU _MMIO(0x4dd4) #define GEN8_L3_LRA_1_GPGPU_DEFAULT_VALUE_BDW 0x67F1427F /* max/min 
for LRA1/2 */ From patchwork Thu Dec 9 15:45:23 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ramalingam C X-Patchwork-Id: 12667655 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 04D4FC4332F for ; Thu, 9 Dec 2021 17:02:45 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 6AAEE10F389; Thu, 9 Dec 2021 16:55:29 +0000 (UTC) Received: from mga06.intel.com (mga06.intel.com [134.134.136.31]) by gabe.freedesktop.org (Postfix) with ESMTPS id A42C510E117; Thu, 9 Dec 2021 15:46:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1639064784; x=1670600784; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=dRMoTCA7eMOHxb3pZA1fmqHXvQq06AdDrVA1VGuKHHY=; b=OVfs4XCGRm/wD3IpzAdxIvoqcIcZaWm7RxnwHHNgo60114lKggmKv6KX KccT56e9sXAVrbEPVMKTHLEKu4cQh44lPCJg1QWesiR0c5Nm07zvY7sRK vRkI45ovBuWQ10ymIRlSZwgJKuxDPx4OkpSbVsfoxuz62Xm28XnQkWShy VKoyYf8Dh2c1eZeLkGbAPUUMQlHxvSUuiaJvllAwFUS2sjF+T1xLjBfpc duHDTllDRT72Wi2yNoUGfQTnUxZimKBBQzQGhfSllgXBrm86Aft16lMyT V5e0bsGWBlPABiejs6DsmYwp0I/2a6RFqf/MaKq13v8cHOMg1rbCPsECG Q==; X-IronPort-AV: E=McAfee;i="6200,9189,10192"; a="298916948" X-IronPort-AV: E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="298916948" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:46:09 -0800 X-IronPort-AV: E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="503535193" Received: from ramaling-i9x.iind.intel.com ([10.99.66.205]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:46:06 -0800 From: Ramalingam C To: dri-devel , intel-gfx Date: Thu, 9 Dec 2021 21:15:23 +0530 Message-Id: <20211209154533.4084-7-ramalingam.c@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20211209154533.4084-1-ramalingam.c@intel.com> References: <20211209154533.4084-1-ramalingam.c@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 06/16] drm/i915/gt: Clear compress metadata for Xe_HP platforms X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: CQ Tang , Hellstrom Thomas , Matthew Auld Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" From: Ayaz A Siddiqui Xe-HP and latest devices support Flat CCS which reserved a portion of the device memory to store compression metadata, during the clearing of device memory buffer object we also need to clear the associated CCS buffer. Flat CCS memory can not be directly accessed by S/W. Address of CCS buffer associated main BO is automatically calculated by device itself. KMD/UMD can only access this buffer indirectly using XY_CTRL_SURF_COPY_BLT cmd via the address of device memory buffer. 
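
[Editor's illustrative aside: the sizing rule the patch relies on is that CCS occupies 1/256th of the main surface, XY_CTRL_SURF_COPY_BLT moves CCS in 256-byte blocks, and one command can move at most 1024 such blocks (256 KiB of CCS, i.e. metadata for 64 MiB of LMEM). The plain C sketch below reproduces that arithmetic in the spirit of calc_ctrl_surf_instr_size() from the diff; it is an illustration, not the kernel code.]

#include <stdio.h>
#include <stdint.h>

#define NUM_CCS_BYTES_PER_BLOCK 256
#define NUM_CCS_BLKS_PER_XFER   1024
#define XY_CTRL_SURF_INSTR_SIZE 5   /* dwords per XY_CTRL_SURF_COPY_BLT */
#define MI_FLUSH_DW_SIZE        3   /* dwords per MI_FLUSH_DW */

int main(void)
{
	uint64_t obj_size = 128ULL << 20;   /* example: a 128 MiB LMEM object */
	uint64_t ccs_size = obj_size >> 8;  /* CCS is 1/256th of the main surface */

	uint64_t num_blks = (ccs_size + NUM_CCS_BYTES_PER_BLOCK - 1) / NUM_CCS_BYTES_PER_BLOCK;
	uint64_t num_cmds = (num_blks + NUM_CCS_BLKS_PER_XFER - 1) / NUM_CCS_BLKS_PER_XFER;
	/* One flush before and one after the blit commands, as in the patch. */
	uint64_t ring_dwords = num_cmds * XY_CTRL_SURF_INSTR_SIZE + 2 * MI_FLUSH_DW_SIZE;

	printf("%llu CCS blocks, %llu blit commands, %llu ring dwords\n",
	       (unsigned long long)num_blks, (unsigned long long)num_cmds,
	       (unsigned long long)ring_dwords);
	return 0;
}
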
v2: Fixed issues with platform naming [Lucas] Cc: CQ Tang Signed-off-by: Ayaz A Siddiqui Signed-off-by: Ramalingam C --- drivers/gpu/drm/i915/gt/intel_gpu_commands.h | 14 +++ drivers/gpu/drm/i915/gt/intel_migrate.c | 120 ++++++++++++++++++- 2 files changed, 131 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_gpu_commands.h b/drivers/gpu/drm/i915/gt/intel_gpu_commands.h index f8253012d166..07bf5a1753bd 100644 --- a/drivers/gpu/drm/i915/gt/intel_gpu_commands.h +++ b/drivers/gpu/drm/i915/gt/intel_gpu_commands.h @@ -203,6 +203,20 @@ #define GFX_OP_DRAWRECT_INFO ((0x3<<29)|(0x1d<<24)|(0x80<<16)|(0x3)) #define GFX_OP_DRAWRECT_INFO_I965 ((0x7900<<16)|0x2) +#define XY_CTRL_SURF_INSTR_SIZE 5 +#define MI_FLUSH_DW_SIZE 3 +#define XY_CTRL_SURF_COPY_BLT ((2 << 29) | (0x48 << 22) | 3) +#define SRC_ACCESS_TYPE_SHIFT 21 +#define DST_ACCESS_TYPE_SHIFT 20 +#define CCS_SIZE_SHIFT 8 +#define XY_CTRL_SURF_MOCS_SHIFT 25 +#define NUM_CCS_BYTES_PER_BLOCK 256 +#define NUM_CCS_BLKS_PER_XFER 1024 +#define INDIRECT_ACCESS 0 +#define DIRECT_ACCESS 1 +#define MI_FLUSH_LLC BIT(9) +#define MI_FLUSH_CCS BIT(16) + #define COLOR_BLT_CMD (2 << 29 | 0x40 << 22 | (5 - 2)) #define XY_COLOR_BLT_CMD (2 << 29 | 0x50 << 22) #define SRC_COPY_BLT_CMD (2 << 29 | 0x43 << 22) diff --git a/drivers/gpu/drm/i915/gt/intel_migrate.c b/drivers/gpu/drm/i915/gt/intel_migrate.c index 19a01878fee3..64ffaacac1e0 100644 --- a/drivers/gpu/drm/i915/gt/intel_migrate.c +++ b/drivers/gpu/drm/i915/gt/intel_migrate.c @@ -16,6 +16,7 @@ struct insert_pte_data { }; #define CHUNK_SZ SZ_8M /* ~1ms at 8GiB/s preemption delay */ +#define GET_CCS_SIZE(i915, size) (HAS_FLAT_CCS(i915) ? (size) >> 8 : 0) static bool engine_supports_migration(struct intel_engine_cs *engine) { @@ -488,15 +489,104 @@ intel_context_migrate_copy(struct intel_context *ce, return err; } -static int emit_clear(struct i915_request *rq, int size, u32 value) +static inline u32 *i915_flush_dw(u32 *cmd, u64 dst, u32 flags) +{ + /* Mask the 3 LSB to use the PPGTT address space */ + *cmd++ = MI_FLUSH_DW | flags; + *cmd++ = lower_32_bits(dst); + *cmd++ = upper_32_bits(dst); + + return cmd; +} + +static u32 calc_ctrl_surf_instr_size(struct drm_i915_private *i915, int size) +{ + u32 num_cmds, num_blks, total_size; + + if (!GET_CCS_SIZE(i915, size)) + return 0; + + /* + * XY_CTRL_SURF_COPY_BLT transfers CCS in 256 byte + * blocks. one XY_CTRL_SURF_COPY_BLT command can + * trnasfer upto 1024 blocks. + */ + num_blks = (GET_CCS_SIZE(i915, size) + + (NUM_CCS_BYTES_PER_BLOCK - 1)) >> 8; + num_cmds = (num_blks + (NUM_CCS_BLKS_PER_XFER - 1)) >> 10; + total_size = (XY_CTRL_SURF_INSTR_SIZE) * num_cmds; + + /* + * We need to add a flush before and after + * XY_CTRL_SURF_COPY_BLT + */ + total_size += 2 * MI_FLUSH_DW_SIZE; + return total_size; +} + +static u32 *_i915_ctrl_surf_copy_blt(u32 *cmd, u64 src_addr, u64 dst_addr, + u8 src_mem_access, u8 dst_mem_access, + int src_mocs, int dst_mocs, + u16 num_ccs_blocks) +{ + int i = num_ccs_blocks; + + /* + * The XY_CTRL_SURF_COPY_BLT instruction is used to copy the CCS + * data in and out of the CCS region. + * + * We can copy at most 1024 blocks of 256 bytes using one + * XY_CTRL_SURF_COPY_BLT instruction. + * + * In case we need to copy more than 1024 blocks, we need to add + * another instruction to the same batch buffer. + * + * 1024 blocks of 256 bytes of CCS represent a total 256KB of CCS. + * + * 256 KB of CCS represents 256 * 256 KB = 64 MB of LMEM. 
+ */ + do { + /* + * We use logical AND with 1023 since the size field + * takes values which is in the range of 0 - 1023 + */ + *cmd++ = ((XY_CTRL_SURF_COPY_BLT) | + (src_mem_access << SRC_ACCESS_TYPE_SHIFT) | + (dst_mem_access << DST_ACCESS_TYPE_SHIFT) | + (((i - 1) & 1023) << CCS_SIZE_SHIFT)); + *cmd++ = lower_32_bits(src_addr); + *cmd++ = ((upper_32_bits(src_addr) & 0xFFFF) | + (src_mocs << XY_CTRL_SURF_MOCS_SHIFT)); + *cmd++ = lower_32_bits(dst_addr); + *cmd++ = ((upper_32_bits(dst_addr) & 0xFFFF) | + (dst_mocs << XY_CTRL_SURF_MOCS_SHIFT)); + src_addr += SZ_64M; + dst_addr += SZ_64M; + i -= NUM_CCS_BLKS_PER_XFER; + } while (i > 0); + + return cmd; +} + +static int emit_clear(struct i915_request *rq, + int size, + u32 value, + bool is_lmem) { const int ver = GRAPHICS_VER(rq->engine->i915); u32 instance = rq->engine->instance; u32 *cs; + struct drm_i915_private *i915 = rq->engine->i915; + u32 num_ccs_blks, ccs_ring_size; GEM_BUG_ON(size >> PAGE_SHIFT > S16_MAX); - cs = intel_ring_begin(rq, ver >= 8 ? 8 : 6); + /* Clear flat css only when value is 0 */ + ccs_ring_size = (is_lmem && !value) ? + calc_ctrl_surf_instr_size(i915, size) + : 0; + + cs = intel_ring_begin(rq, ver >= 8 ? 8 + ccs_ring_size : 6); if (IS_ERR(cs)) return PTR_ERR(cs); @@ -519,6 +609,30 @@ static int emit_clear(struct i915_request *rq, int size, u32 value) *cs++ = value; } + if (is_lmem && HAS_FLAT_CCS(i915) && !value) { + num_ccs_blks = (GET_CCS_SIZE(i915, size) + + NUM_CCS_BYTES_PER_BLOCK - 1) >> 8; + /* + * Flat CCS surface can only be accessed via + * XY_CTRL_SURF_COPY_BLT CMD and using indirect + * mapping of associated LMEM. + * We can clear ccs surface by writing all 0s, + * so we will flush the previously cleared buffer + * and use it as a source. + */ + + cs = i915_flush_dw(cs, (u64)instance << 32, + MI_FLUSH_LLC | MI_FLUSH_CCS); + cs = _i915_ctrl_surf_copy_blt(cs, + (u64)instance << 32, + (u64)instance << 32, + DIRECT_ACCESS, + INDIRECT_ACCESS, + 1, 1, + num_ccs_blks); + cs = i915_flush_dw(cs, (u64)instance << 32, + MI_FLUSH_LLC | MI_FLUSH_CCS); + } intel_ring_advance(rq, cs); return 0; } @@ -579,7 +693,7 @@ intel_context_migrate_clear(struct intel_context *ce, if (err) goto out_rq; - err = emit_clear(rq, len, value); + err = emit_clear(rq, len, value, is_lmem); /* Arbitration is re-enabled between requests. 
*/ out_rq: From patchwork Thu Dec 9 15:45:24 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Ramalingam C X-Patchwork-Id: 12667649 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 04985C433FE for ; Thu, 9 Dec 2021 17:02:33 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 2BB0210E773; Thu, 9 Dec 2021 16:55:17 +0000 (UTC) Received: from mga06.intel.com (mga06.intel.com [134.134.136.31]) by gabe.freedesktop.org (Postfix) with ESMTPS id 7E1F510E117; Thu, 9 Dec 2021 15:46:27 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1639064787; x=1670600787; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=UStfGI/WEuNVbvqxvL82wYrweqIAtn0XUNCdRUQlnaI=; b=YQbVUSOsGAIYq2JB6VUDe0qkI2eZPgYuwqEuXKSKA89yY89syzBWv42x U0H3IBKZ+xz16A3BmEn35YF9M0QPvLJsm4juJUCl5/SJFsEd0JnDafFdG QFKRRi5GiuQN3LiTvUc44QOjLzXzvKFL/JnueiuHMSrxfK+BofJZ/WYye y+i4EvJSjrXOJ5xZ1CKixrZFLEj79KOalzgO8HiTpjJeDpF46qajnMu/F b+pp5XQTKTJiWlOXTXi4yieLwSh8gDVqis58MG6zr35yZKgjpXO/4L5ek Zk/ew68m86AaeZqOUNW+nfsyPbAjn6HbrF9/gv984H/Uz45T5tr05qm89 g==; X-IronPort-AV: E=McAfee;i="6200,9189,10192"; a="298916977" X-IronPort-AV: E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="298916977" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:46:11 -0800 X-IronPort-AV: E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="503535203" Received: from ramaling-i9x.iind.intel.com ([10.99.66.205]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:46:09 -0800 From: Ramalingam C To: dri-devel , intel-gfx Date: Thu, 9 Dec 2021 21:15:24 +0530 Message-Id: <20211209154533.4084-8-ramalingam.c@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20211209154533.4084-1-ramalingam.c@intel.com> References: <20211209154533.4084-1-ramalingam.c@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 07/16] drm/i915/dg2: Tile 4 plane format support X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Hellstrom Thomas , Matthew Auld Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" From: Stanislav Lisovskiy Tile4 in bspec format is 4K tile organized into 64B subtiles with same basic shape as for legacy TileY which will be supported by Display13. v2: - Moved Tile4 associating struct for modifier/display to the beginning(Imre Deak) - Removed unneeded case I915_FORMAT_MOD_4_TILED modifier checks(Imre Deak) - Fixed I915_FORMAT_MOD_4_TILED to be 9 instead of 12 (Imre Deak) v3: - Rebased patch on top of new changes related to plane_caps. 
- Added static assert to check that PLANE_CTL_TILING_YF matches PLANE_CTL_TILING_4(Nanley Chery) - Fixed naming and layout description for Tile 4 in drm uapi header(Nanley Chery) Cc: Matt Roper Cc: Maarten Lankhorst Signed-off-by: Stanislav Lisovskiy Signed-off-by: Matt Roper Signed-off-by: Juha-Pekka Heikkilä --- drivers/gpu/drm/i915/display/intel_display.c | 1 + drivers/gpu/drm/i915/display/intel_fb.c | 15 +++++++++++- drivers/gpu/drm/i915/display/intel_fb.h | 1 + drivers/gpu/drm/i915/display/intel_fbc.c | 1 + .../drm/i915/display/intel_plane_initial.c | 1 + .../drm/i915/display/skl_universal_plane.c | 23 ++++++++++++------- drivers/gpu/drm/i915/i915_drv.h | 1 + drivers/gpu/drm/i915/i915_pci.c | 1 + drivers/gpu/drm/i915/i915_reg.h | 1 + drivers/gpu/drm/i915/intel_device_info.h | 1 + drivers/gpu/drm/i915/intel_pm.c | 1 + include/uapi/drm/drm_fourcc.h | 11 +++++++++ 12 files changed, 49 insertions(+), 9 deletions(-) diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c index 128d4943a43b..83253c62b6d6 100644 --- a/drivers/gpu/drm/i915/display/intel_display.c +++ b/drivers/gpu/drm/i915/display/intel_display.c @@ -7777,6 +7777,7 @@ static int intel_atomic_check_async(struct intel_atomic_state *state, struct int case I915_FORMAT_MOD_X_TILED: case I915_FORMAT_MOD_Y_TILED: case I915_FORMAT_MOD_Yf_TILED: + case I915_FORMAT_MOD_4_TILED: break; default: drm_dbg_kms(&i915->drm, diff --git a/drivers/gpu/drm/i915/display/intel_fb.c b/drivers/gpu/drm/i915/display/intel_fb.c index 23cfe2e5ce2a..46505c69fe72 100644 --- a/drivers/gpu/drm/i915/display/intel_fb.c +++ b/drivers/gpu/drm/i915/display/intel_fb.c @@ -135,11 +135,16 @@ struct intel_modifier_desc { INTEL_PLANE_CAP_CCS_MC) #define INTEL_PLANE_CAP_TILING_MASK (INTEL_PLANE_CAP_TILING_X | \ INTEL_PLANE_CAP_TILING_Y | \ - INTEL_PLANE_CAP_TILING_Yf) + INTEL_PLANE_CAP_TILING_Yf | \ + INTEL_PLANE_CAP_TILING_4) #define INTEL_PLANE_CAP_TILING_NONE 0 static const struct intel_modifier_desc intel_modifiers[] = { { + .modifier = I915_FORMAT_MOD_4_TILED, + .display_ver = { 13, 14 }, + .plane_caps = INTEL_PLANE_CAP_TILING_4, + }, { .modifier = I915_FORMAT_MOD_Y_TILED_GEN12_MC_CCS, .display_ver = { 12, 13 }, .plane_caps = INTEL_PLANE_CAP_TILING_Y | INTEL_PLANE_CAP_CCS_MC, @@ -545,6 +550,12 @@ intel_tile_width_bytes(const struct drm_framebuffer *fb, int color_plane) return 128; else return 512; + case I915_FORMAT_MOD_4_TILED: + /* + * Each 4K tile consists of 64B(8*8) subtiles, with + * same shape as Y Tile(i.e 4*16B OWords) + */ + return 128; case I915_FORMAT_MOD_Y_TILED_CCS: if (intel_fb_is_ccs_aux_plane(fb, color_plane)) return 128; @@ -650,6 +661,7 @@ static unsigned int intel_fb_modifier_to_tiling(u64 fb_modifier) return I915_TILING_Y; case INTEL_PLANE_CAP_TILING_X: return I915_TILING_X; + case INTEL_PLANE_CAP_TILING_4: case INTEL_PLANE_CAP_TILING_Yf: case INTEL_PLANE_CAP_TILING_NONE: return I915_TILING_NONE; @@ -737,6 +749,7 @@ unsigned int intel_surf_alignment(const struct drm_framebuffer *fb, case I915_FORMAT_MOD_Y_TILED_CCS: case I915_FORMAT_MOD_Yf_TILED_CCS: case I915_FORMAT_MOD_Y_TILED: + case I915_FORMAT_MOD_4_TILED: case I915_FORMAT_MOD_Yf_TILED: return 1 * 1024 * 1024; default: diff --git a/drivers/gpu/drm/i915/display/intel_fb.h b/drivers/gpu/drm/i915/display/intel_fb.h index ba9df8986c1e..12386f13a4e0 100644 --- a/drivers/gpu/drm/i915/display/intel_fb.h +++ b/drivers/gpu/drm/i915/display/intel_fb.h @@ -27,6 +27,7 @@ struct intel_plane_state; #define INTEL_PLANE_CAP_TILING_X BIT(3) #define 
INTEL_PLANE_CAP_TILING_Y BIT(4) #define INTEL_PLANE_CAP_TILING_Yf BIT(5) +#define INTEL_PLANE_CAP_TILING_4 BIT(6) bool intel_fb_is_ccs_modifier(u64 modifier); bool intel_fb_is_rc_ccs_cc_modifier(u64 modifier); diff --git a/drivers/gpu/drm/i915/display/intel_fbc.c b/drivers/gpu/drm/i915/display/intel_fbc.c index 8be01b93015f..c62da58a7d5a 100644 --- a/drivers/gpu/drm/i915/display/intel_fbc.c +++ b/drivers/gpu/drm/i915/display/intel_fbc.c @@ -936,6 +936,7 @@ static bool tiling_is_valid(const struct intel_plane_state *plane_state) case I915_FORMAT_MOD_Y_TILED: case I915_FORMAT_MOD_Yf_TILED: return DISPLAY_VER(i915) >= 9; + case I915_FORMAT_MOD_4_TILED: case I915_FORMAT_MOD_X_TILED: return true; default: diff --git a/drivers/gpu/drm/i915/display/intel_plane_initial.c b/drivers/gpu/drm/i915/display/intel_plane_initial.c index 01ce1d72297f..4ae9730ceeff 100644 --- a/drivers/gpu/drm/i915/display/intel_plane_initial.c +++ b/drivers/gpu/drm/i915/display/intel_plane_initial.c @@ -126,6 +126,7 @@ intel_alloc_initial_plane_obj(struct intel_crtc *crtc, case DRM_FORMAT_MOD_LINEAR: case I915_FORMAT_MOD_X_TILED: case I915_FORMAT_MOD_Y_TILED: + case I915_FORMAT_MOD_4_TILED: break; default: drm_dbg(&dev_priv->drm, diff --git a/drivers/gpu/drm/i915/display/skl_universal_plane.c b/drivers/gpu/drm/i915/display/skl_universal_plane.c index d5359cf3d270..f62ba027fcf9 100644 --- a/drivers/gpu/drm/i915/display/skl_universal_plane.c +++ b/drivers/gpu/drm/i915/display/skl_universal_plane.c @@ -762,6 +762,8 @@ static u32 skl_plane_ctl_tiling(u64 fb_modifier) return PLANE_CTL_TILED_X; case I915_FORMAT_MOD_Y_TILED: return PLANE_CTL_TILED_Y; + case I915_FORMAT_MOD_4_TILED: + return PLANE_CTL_TILED_4; case I915_FORMAT_MOD_Y_TILED_CCS: case I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS_CC: return PLANE_CTL_TILED_Y | PLANE_CTL_RENDER_DECOMPRESSION_ENABLE; @@ -1990,9 +1992,7 @@ static bool gen12_plane_format_mod_supported(struct drm_plane *_plane, case DRM_FORMAT_Y216: case DRM_FORMAT_XVYU12_16161616: case DRM_FORMAT_XVYU16161616: - if (modifier == DRM_FORMAT_MOD_LINEAR || - modifier == I915_FORMAT_MOD_X_TILED || - modifier == I915_FORMAT_MOD_Y_TILED) + if (!intel_fb_is_ccs_modifier(modifier)) return true; fallthrough; default: @@ -2085,6 +2085,8 @@ static u8 skl_get_plane_caps(struct drm_i915_private *i915, caps |= INTEL_PLANE_CAP_TILING_Y; if (DISPLAY_VER(i915) < 12) caps |= INTEL_PLANE_CAP_TILING_Yf; + if (HAS_4TILE(i915)) + caps |= INTEL_PLANE_CAP_TILING_4; if (skl_plane_has_rc_ccs(i915, pipe, plane_id)) { caps |= INTEL_PLANE_CAP_CCS_RC; @@ -2257,6 +2259,7 @@ skl_get_initial_plane_config(struct intel_crtc *crtc, unsigned int aligned_height; struct drm_framebuffer *fb; struct intel_framebuffer *intel_fb; + static_assert(PLANE_CTL_TILED_YF == PLANE_CTL_TILED_4); if (!plane->get_hw_state(plane, &pipe)) return; @@ -2318,11 +2321,15 @@ skl_get_initial_plane_config(struct intel_crtc *crtc, else fb->modifier = I915_FORMAT_MOD_Y_TILED; break; - case PLANE_CTL_TILED_YF: - if (val & PLANE_CTL_RENDER_DECOMPRESSION_ENABLE) - fb->modifier = I915_FORMAT_MOD_Yf_TILED_CCS; - else - fb->modifier = I915_FORMAT_MOD_Yf_TILED; + case PLANE_CTL_TILED_YF: /* aka PLANE_CTL_TILED_4 on XE_LPD+ */ + if (HAS_4TILE(dev_priv)) { + fb->modifier = I915_FORMAT_MOD_4_TILED; + } else { + if (val & PLANE_CTL_RENDER_DECOMPRESSION_ENABLE) + fb->modifier = I915_FORMAT_MOD_Yf_TILED_CCS; + else + fb->modifier = I915_FORMAT_MOD_Yf_TILED; + } break; default: MISSING_CASE(tiling); diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index 
ad2dd18f7622..cbcb5689391a 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -1444,6 +1444,7 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915, #define CMDPARSER_USES_GGTT(dev_priv) (GRAPHICS_VER(dev_priv) == 7) #define HAS_LLC(dev_priv) (INTEL_INFO(dev_priv)->has_llc) +#define HAS_4TILE(dev_priv) (INTEL_INFO(dev_priv)->has_4tile) #define HAS_SNOOP(dev_priv) (INTEL_INFO(dev_priv)->has_snoop) #define HAS_EDRAM(dev_priv) ((dev_priv)->edram_size_mb) #define HAS_SECURE_BATCHES(dev_priv) (GRAPHICS_VER(dev_priv) < 6) diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c index 382e7278058a..6cddd6ac0db8 100644 --- a/drivers/gpu/drm/i915/i915_pci.c +++ b/drivers/gpu/drm/i915/i915_pci.c @@ -1047,6 +1047,7 @@ static const struct intel_device_info dg2_info = { DGFX_FEATURES, .graphics.rel = 55, .media.rel = 55, + .has_4tile = 1, PLATFORM(INTEL_DG2), .has_64k_pages = 1, .platform_engine_mask = diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h index 29f1cafb0f4b..eb0dc1ec1744 100644 --- a/drivers/gpu/drm/i915/i915_reg.h +++ b/drivers/gpu/drm/i915/i915_reg.h @@ -7285,6 +7285,7 @@ enum { #define PLANE_CTL_TILED_X (1 << 10) #define PLANE_CTL_TILED_Y (4 << 10) #define PLANE_CTL_TILED_YF (5 << 10) +#define PLANE_CTL_TILED_4 (5 << 10) #define PLANE_CTL_ASYNC_FLIP (1 << 9) #define PLANE_CTL_FLIP_HORIZONTAL (1 << 8) #define PLANE_CTL_MEDIA_DECOMPRESSION_ENABLE (1 << 4) /* TGL+ */ diff --git a/drivers/gpu/drm/i915/intel_device_info.h b/drivers/gpu/drm/i915/intel_device_info.h index cbbb40e8451f..57835487a6c5 100644 --- a/drivers/gpu/drm/i915/intel_device_info.h +++ b/drivers/gpu/drm/i915/intel_device_info.h @@ -130,6 +130,7 @@ enum intel_ppgtt_type { func(gpu_reset_clobbers_display); \ func(has_reset_engine); \ func(has_flat_ccs); \ + func(has_4tile); \ func(has_global_mocs); \ func(has_gt_uc); \ func(has_l3_dpf); \ diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c index abad48e1690e..a32dea144bb6 100644 --- a/drivers/gpu/drm/i915/intel_pm.c +++ b/drivers/gpu/drm/i915/intel_pm.c @@ -5381,6 +5381,7 @@ skl_compute_wm_params(const struct intel_crtc_state *crtc_state, } wp->y_tiled = modifier == I915_FORMAT_MOD_Y_TILED || + modifier == I915_FORMAT_MOD_4_TILED || modifier == I915_FORMAT_MOD_Yf_TILED || modifier == I915_FORMAT_MOD_Y_TILED_CCS || modifier == I915_FORMAT_MOD_Yf_TILED_CCS; diff --git a/include/uapi/drm/drm_fourcc.h b/include/uapi/drm/drm_fourcc.h index 7f652c96845b..a146c6df1066 100644 --- a/include/uapi/drm/drm_fourcc.h +++ b/include/uapi/drm/drm_fourcc.h @@ -565,6 +565,17 @@ extern "C" { */ #define I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS_CC fourcc_mod_code(INTEL, 8) +/* + * Intel Tile 4 layout + * + * This is a tiled layout using 4KB tiles in a row-major layout. It has the same + * shape as Tile Y at two granularities: 4KB (128B x 32) and 64B (16B x 4). It + * only differs from Tile Y at the 256B granularity in between. At this + * granularity, Tile Y has a shape of 16B x 32 rows, but this tiling has a shape + * of 64B x 8 rows. 
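+ * Put differently, one 4K tile is an 8x8 grid of those 64B subtiles,
+ * each subtile being 16B wide by 4 rows tall.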
+ */ +#define I915_FORMAT_MOD_4_TILED fourcc_mod_code(INTEL, 9) + /* * Tiled, NV12MT, grouped in 64 (pixels) x 32 (lines) -sized macroblocks * From patchwork Thu Dec 9 15:45:25 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Ramalingam C X-Patchwork-Id: 12667641 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 1B412C433FE for ; Thu, 9 Dec 2021 17:02:26 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id AD00B10E6C5; Thu, 9 Dec 2021 16:55:13 +0000 (UTC) Received: from mga06.intel.com (mga06.intel.com [134.134.136.31]) by gabe.freedesktop.org (Postfix) with ESMTPS id 595F489123; Thu, 9 Dec 2021 15:46:30 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1639064790; x=1670600790; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=dbUtA5aW3wGaiPYniYGCTLc4GOkr49yscbbi241/UiM=; b=BzyNYYXKbWL0k8LMv8BrVZ7wo0YHOxabLFRm1Sufp0LIpLRCWXmEeu9f RJGCVBKvk5sVmbbt9OzED5aYk9SMpMhQDsT8YX+oiReEPMJ9r2FkVKbkJ 8FIji3BByxQSL65jZL0wn/yDr6PXeucRkiKR55A7NzcBY05rV5PAw271z I3a1GY0rf2cqXytqHiYj/zXZNn+O6cmns+3bKCthOs0uhx9SYroJueJdO MMidhXwMWPlsilID+cjECmZCQvBWP5snGBilEww09vY+RmSdcI/SApL/d xYFqhDwGjRbZrgnDnGwRKAc4WByAca4t6LlME583fTdk212ObH6fyEcQr Q==; X-IronPort-AV: E=McAfee;i="6200,9189,10192"; a="298917003" X-IronPort-AV: E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="298917003" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:46:14 -0800 X-IronPort-AV: E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="503535206" Received: from ramaling-i9x.iind.intel.com ([10.99.66.205]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:46:11 -0800 From: Ramalingam C To: dri-devel , intel-gfx Date: Thu, 9 Dec 2021 21:15:25 +0530 Message-Id: <20211209154533.4084-9-ramalingam.c@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20211209154533.4084-1-ramalingam.c@intel.com> References: <20211209154533.4084-1-ramalingam.c@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 08/16] drm/i915/gtt: allow overriding the pt alignment X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Hellstrom Thomas , Matthew Auld , =?utf-8?q?Thomas_Hellstr=C3=B6m?= Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" From: Matthew Auld On some platforms we have alignment restrictions when accessing LMEM from the GTT. In the next patch few patches we need to be able to modify the page-tables directly via the GTT itself. 
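For illustration, a minimal caller-side sketch of the new knob, assuming the
i915_vm_alloc_pt_stash() flow below (this mirrors how the migration code later
in this series requests 64K-aligned LMEM page tables; SZ_2M is just an example
VA range):

    struct i915_vm_pt_stash stash = {};
    int err;

    /* Only discrete parts may override the PT size/alignment. */
    if (HAS_64K_PAGES(vm->i915))
        stash.pt_sz = I915_GTT_PAGE_SIZE_64K;
    /* else pt_sz stays 0 and alloc_pt() falls back to I915_GTT_PAGE_SIZE_4K */

    err = i915_vm_alloc_pt_stash(vm, &stash, SZ_2M);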
Suggested-by: Ramalingam C Signed-off-by: Matthew Auld Cc: Thomas Hellström Cc: Ramalingam C --- drivers/gpu/drm/i915/gt/intel_gtt.h | 10 +++++++++- drivers/gpu/drm/i915/gt/intel_ppgtt.c | 16 ++++++++++++---- 2 files changed, 21 insertions(+), 5 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.h b/drivers/gpu/drm/i915/gt/intel_gtt.h index 01e9a98846fb..5ca5caa667b8 100644 --- a/drivers/gpu/drm/i915/gt/intel_gtt.h +++ b/drivers/gpu/drm/i915/gt/intel_gtt.h @@ -199,6 +199,14 @@ void *__px_vaddr(struct drm_i915_gem_object *p); struct i915_vm_pt_stash { /* preallocated chains of page tables/directories */ struct i915_page_table *pt[2]; + /* + * Optionally override the alignment/size of the physical page that + * contains each PT. If not set defaults back to the usual + * I915_GTT_PAGE_SIZE_4K. This does not influence the other paging + * structures. MUST be a power-of-two. ONLY applicable on discrete + * platforms. + */ + int pt_sz; }; struct i915_vma_ops { @@ -586,7 +594,7 @@ void free_scratch(struct i915_address_space *vm); struct drm_i915_gem_object *alloc_pt_dma(struct i915_address_space *vm, int sz); struct drm_i915_gem_object *alloc_pt_lmem(struct i915_address_space *vm, int sz); -struct i915_page_table *alloc_pt(struct i915_address_space *vm); +struct i915_page_table *alloc_pt(struct i915_address_space *vm, int sz); struct i915_page_directory *alloc_pd(struct i915_address_space *vm); struct i915_page_directory *__alloc_pd(int npde); diff --git a/drivers/gpu/drm/i915/gt/intel_ppgtt.c b/drivers/gpu/drm/i915/gt/intel_ppgtt.c index b8238f5bc8b1..3c90aea25072 100644 --- a/drivers/gpu/drm/i915/gt/intel_ppgtt.c +++ b/drivers/gpu/drm/i915/gt/intel_ppgtt.c @@ -12,7 +12,7 @@ #include "gen6_ppgtt.h" #include "gen8_ppgtt.h" -struct i915_page_table *alloc_pt(struct i915_address_space *vm) +struct i915_page_table *alloc_pt(struct i915_address_space *vm, int sz) { struct i915_page_table *pt; @@ -20,7 +20,7 @@ struct i915_page_table *alloc_pt(struct i915_address_space *vm) if (unlikely(!pt)) return ERR_PTR(-ENOMEM); - pt->base = vm->alloc_pt_dma(vm, I915_GTT_PAGE_SIZE_4K); + pt->base = vm->alloc_pt_dma(vm, sz); if (IS_ERR(pt->base)) { kfree(pt); return ERR_PTR(-ENOMEM); @@ -219,17 +219,25 @@ int i915_vm_alloc_pt_stash(struct i915_address_space *vm, u64 size) { unsigned long count; - int shift, n; + int shift, n, pt_sz; shift = vm->pd_shift; if (!shift) return 0; + pt_sz = stash->pt_sz; + if (!pt_sz) + pt_sz = I915_GTT_PAGE_SIZE_4K; + else + GEM_BUG_ON(!IS_DGFX(vm->i915)); + + GEM_BUG_ON(!is_power_of_2(pt_sz)); + count = pd_count(size, shift); while (count--) { struct i915_page_table *pt; - pt = alloc_pt(vm); + pt = alloc_pt(vm, pt_sz); if (IS_ERR(pt)) { i915_vm_free_pt_stash(vm, stash); return PTR_ERR(pt); From patchwork Thu Dec 9 15:45:26 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Ramalingam C X-Patchwork-Id: 12667661 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 675E0C433FE for ; Thu, 9 Dec 2021 17:03:03 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 501A610F571; Thu, 9 Dec 2021 16:55:37 +0000 (UTC) Received: from mga06.intel.com (mga06.intel.com 
[134.134.136.31]) by gabe.freedesktop.org (Postfix) with ESMTPS id 2832C89179; Thu, 9 Dec 2021 15:46:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1639064792; x=1670600792; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=vZjZKL71fdV+1+E/ZvT8ZQXILUKWicxMJ0ws1f3obak=; b=WXkaDvTjiS0i+aFonIZ8j+RMls1lgJnqP0I0asny95ZFOBNaxqQzVBAq r3Hfa/iXjxsH5ab31AtMNWqRpu1PNpOUI9DhLaQKvlWjvxoa/prLwEdxg xmNZWL7DIXNl14+gX5lk0jxOVuoSmUf9LYoOMB/LKY6AHBo8X8358g5Vk fy58JtEhXySGQgNzdtiq7iD7aquyvU2ZO30YO02Htpp80Pddwqz0/kCox 5yjD13x7iF9UMMZbekiiWMtTFbaIxhckrI77tHu/MorXj0LSqNI6CAX4f sM7JJcYibVfUUuIn3bKP8Cjkr6Iy0k5F9jE8HFPb47ZExzAqSABQmUBSc g==; X-IronPort-AV: E=McAfee;i="6200,9189,10192"; a="298917021" X-IronPort-AV: E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="298917021" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:46:16 -0800 X-IronPort-AV: E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="503535210" Received: from ramaling-i9x.iind.intel.com ([10.99.66.205]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:46:14 -0800 From: Ramalingam C To: dri-devel , intel-gfx Date: Thu, 9 Dec 2021 21:15:26 +0530 Message-Id: <20211209154533.4084-10-ramalingam.c@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20211209154533.4084-1-ramalingam.c@intel.com> References: <20211209154533.4084-1-ramalingam.c@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 09/16] drm/i915/gtt: add xehpsdv_ppgtt_insert_entry X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , Hellstrom Thomas , Matthew Auld Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" From: Matthew Auld If this is LMEM then we get a 32 entry PT, with each PTE pointing to some 64K block of memory, otherwise it's just the usual 512 entry PT. This very much assumes the caller knows what they are doing. 
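Worked out, both layouts cover the same 2 MiB of virtual address space per
page table (32 * 64K = 512 * 4K = 2M); the LMEM path therefore indexes the
table with the level-0 index divided by 16, since sixteen consecutive
4K-granule slots collapse into one 64K entry. A small sketch of that index
math, using only values visible in this patch:

    u64 idx     = offset >> GEN8_PTE_SHIFT;  /* 4K-granule index        */
    u32 pte_4k  = gen8_pd_index(idx, 0);     /* 0..511 in a normal PT   */
    u32 pte_64k = pte_4k / 16;               /* 0..31 in a compact PT   */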
Signed-off-by: Matthew Auld Cc: Thomas Hellström Cc: Ramalingam C Reviewed-by: Ramalingam C --- drivers/gpu/drm/i915/gt/gen8_ppgtt.c | 50 ++++++++++++++++++++++++++-- 1 file changed, 48 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c index 5db11d8f7c7a..b6e9bfecb023 100644 --- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c +++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c @@ -728,13 +728,56 @@ static void gen8_ppgtt_insert_entry(struct i915_address_space *vm, gen8_pdp_for_page_index(vm, idx); struct i915_page_directory *pd = i915_pd_entry(pdp, gen8_pd_index(idx, 2)); + struct i915_page_table *pt = i915_pt_entry(pd, gen8_pd_index(idx, 1)); gen8_pte_t *vaddr; - vaddr = px_vaddr(i915_pt_entry(pd, gen8_pd_index(idx, 1))); + GEM_BUG_ON(pt->is_compact); + + vaddr = px_vaddr(pt); vaddr[gen8_pd_index(idx, 0)] = gen8_pte_encode(addr, level, flags); clflush_cache_range(&vaddr[gen8_pd_index(idx, 0)], sizeof(*vaddr)); } +static void __xehpsdv_ppgtt_insert_entry_lm(struct i915_address_space *vm, + dma_addr_t addr, + u64 offset, + enum i915_cache_level level, + u32 flags) +{ + u64 idx = offset >> GEN8_PTE_SHIFT; + struct i915_page_directory * const pdp = + gen8_pdp_for_page_index(vm, idx); + struct i915_page_directory *pd = + i915_pd_entry(pdp, gen8_pd_index(idx, 2)); + struct i915_page_table *pt = i915_pt_entry(pd, gen8_pd_index(idx, 1)); + gen8_pte_t *vaddr; + + GEM_BUG_ON(!IS_ALIGNED(addr, SZ_64K)); + GEM_BUG_ON(!IS_ALIGNED(offset, SZ_64K)); + + if (!pt->is_compact) { + vaddr = px_vaddr(pd); + vaddr[gen8_pd_index(idx, 1)] |= GEN12_PDE_64K; + pt->is_compact = true; + } + + vaddr = px_vaddr(pt); + vaddr[gen8_pd_index(idx, 0) / 16] = gen8_pte_encode(addr, level, flags); +} + +static void xehpsdv_ppgtt_insert_entry(struct i915_address_space *vm, + dma_addr_t addr, + u64 offset, + enum i915_cache_level level, + u32 flags) +{ + if (flags & PTE_LM) + return __xehpsdv_ppgtt_insert_entry_lm(vm, addr, offset, + level, flags); + + return gen8_ppgtt_insert_entry(vm, addr, offset, level, flags); +} + static int gen8_init_scratch(struct i915_address_space *vm) { u32 pte_flags; @@ -937,7 +980,10 @@ struct i915_ppgtt *gen8_ppgtt_create(struct intel_gt *gt, ppgtt->vm.bind_async_flags = I915_VMA_LOCAL_BIND; ppgtt->vm.insert_entries = gen8_ppgtt_insert; - ppgtt->vm.insert_page = gen8_ppgtt_insert_entry; + if (HAS_64K_PAGES(gt->i915)) + ppgtt->vm.insert_page = xehpsdv_ppgtt_insert_entry; + else + ppgtt->vm.insert_page = gen8_ppgtt_insert_entry; ppgtt->vm.allocate_va_range = gen8_ppgtt_alloc; ppgtt->vm.clear_range = gen8_ppgtt_clear; ppgtt->vm.foreach = gen8_ppgtt_foreach; From patchwork Thu Dec 9 15:45:27 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Ramalingam C X-Patchwork-Id: 12667647 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id F38E3C433EF for ; Thu, 9 Dec 2021 17:02:31 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 28B0110F0C4; Thu, 9 Dec 2021 16:55:17 +0000 (UTC) Received: from mga06.intel.com (mga06.intel.com [134.134.136.31]) by gabe.freedesktop.org (Postfix) with ESMTPS id 02B99891BB; Thu, 9 Dec 2021 15:46:34 +0000 
(UTC) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1639064795; x=1670600795; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=thJsXxCjyi3w4N0AclwK+Sob4Oa4TzpqL/7XKSjAVhY=; b=JOHPlY2ho5Yr1HQy5iqLYLcyUxZxLRhCrXfBpt8kyFmICIpMaO712jqm rQMnf+hv/dYCn1R3bF2r9TXvVTf0UOUNAUcmRZgwp19aVzQM453Q/w/72 kUA+Jxtlh60oTqPKph4DNv0iGGewqxvWl2rkmzx2KU1XZHKngZ6i6t74o iCIb+hS/LjLqCJQTNW5M7SDac/HA6VmxN4egaASt9YD6WzHPVFjkMs3Tr 2CVOmHqS8b5eIfc+2A8rw3sgn4PoQHyttAm4weskjk1tWU1ZloJA/q2q8 EB7z134bzgH2syj/13ziurYbenlH2Or84Kz6mmZOhtZB0xDU7lvH8uZD4 g==; X-IronPort-AV: E=McAfee;i="6200,9189,10192"; a="298917051" X-IronPort-AV: E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="298917051" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:46:19 -0800 X-IronPort-AV: E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="503535215" Received: from ramaling-i9x.iind.intel.com ([10.99.66.205]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:46:16 -0800 From: Ramalingam C To: dri-devel , intel-gfx Date: Thu, 9 Dec 2021 21:15:27 +0530 Message-Id: <20211209154533.4084-11-ramalingam.c@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20211209154533.4084-1-ramalingam.c@intel.com> References: <20211209154533.4084-1-ramalingam.c@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 10/16] drm/i915/migrate: add acceleration support for DG2 X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Thomas_Hellstr=C3=B6m?= , Hellstrom Thomas , Matthew Auld Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" From: Matthew Auld This is all kinds of awkward since we now have to contend with using 64K GTT pages when mapping anything in LMEM(including the page-tables themselves). Signed-off-by: Matthew Auld Cc: Thomas Hellström Cc: Ramalingam C --- drivers/gpu/drm/i915/gt/intel_migrate.c | 189 +++++++++++++++++++----- 1 file changed, 150 insertions(+), 39 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_migrate.c b/drivers/gpu/drm/i915/gt/intel_migrate.c index 64ffaacac1e0..0fb83d0bec91 100644 --- a/drivers/gpu/drm/i915/gt/intel_migrate.c +++ b/drivers/gpu/drm/i915/gt/intel_migrate.c @@ -33,6 +33,38 @@ static bool engine_supports_migration(struct intel_engine_cs *engine) return true; } +static void xehpsdv_toggle_pdes(struct i915_address_space *vm, + struct i915_page_table *pt, + void *data) +{ + struct insert_pte_data *d = data; + + /* + * Insert a dummy PTE into every PT that will map to LMEM to ensure + * we have a correctly setup PDE structure for later use. + */ + vm->insert_page(vm, 0, d->offset, I915_CACHE_NONE, PTE_LM); + GEM_BUG_ON(!pt->is_compact); + d->offset += SZ_2M; +} + +static void xehpsdv_insert_pte(struct i915_address_space *vm, + struct i915_page_table *pt, + void *data) +{ + struct insert_pte_data *d = data; + + /* + * We are playing tricks here, since the actual pt, from the hw + * pov, is only 256bytes with 32 entries, or 4096bytes with 512 + * entries, but we are still guaranteed that the physical + * alignment is 64K underneath for the pt, and we are careful + * not to access the space in the void. 
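+ * (A PTE is 8 bytes, so that is 32 * 8 = 256 B vs 512 * 8 = 4 KiB;
+ * either way the 64K physical alignment is what lets the table be
+ * mapped through the 64K-granular insert_page() used here.)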
+ */ + vm->insert_page(vm, px_dma(pt), d->offset, I915_CACHE_NONE, PTE_LM); + d->offset += SZ_64K; +} + static void insert_pte(struct i915_address_space *vm, struct i915_page_table *pt, void *data) @@ -75,7 +107,12 @@ static struct i915_address_space *migrate_vm(struct intel_gt *gt) * i.e. within the same non-preemptible window so that we do not switch * to another migration context that overwrites the PTE. * - * TODO: Add support for huge LMEM PTEs + * On platforms with HAS_64K_PAGES support we have three windows, and + * dedicate two windows just for mapping lmem pages(smem <-> smem is not + * a thing), since we are forced to use 64K GTT pages underneath which + * requires also modifying the PDE. An alternative might be to instead + * map the PD into the GTT, and then on the fly toggle the 4K/64K mode + * in the PDE from the same batch that also modifies the PTEs. */ vm = i915_ppgtt_create(gt, I915_BO_ALLOC_PM_EARLY); @@ -87,6 +124,9 @@ static struct i915_address_space *migrate_vm(struct intel_gt *gt) goto err_vm; } + if (HAS_64K_PAGES(gt->i915)) + stash.pt_sz = I915_GTT_PAGE_SIZE_64K; + /* * Each engine instance is assigned its own chunk in the VM, so * that we can run multiple instances concurrently @@ -106,14 +146,20 @@ static struct i915_address_space *migrate_vm(struct intel_gt *gt) * We copy in 8MiB chunks. Each PDE covers 2MiB, so we need * 4x2 page directories for source/destination. */ - sz = 2 * CHUNK_SZ; + if (HAS_64K_PAGES(gt->i915)) + sz = 3 * CHUNK_SZ; + else + sz = 2 * CHUNK_SZ; d.offset = base + sz; /* * We need another page directory setup so that we can write * the 8x512 PTE in each chunk. */ - sz += (sz >> 12) * sizeof(u64); + if (HAS_64K_PAGES(gt->i915)) + sz += (sz / SZ_2M) * SZ_64K; + else + sz += (sz >> 12) * sizeof(u64); err = i915_vm_alloc_pt_stash(&vm->vm, &stash, sz); if (err) @@ -134,7 +180,18 @@ static struct i915_address_space *migrate_vm(struct intel_gt *gt) goto err_vm; /* Now allow the GPU to rewrite the PTE via its own ppGTT */ - vm->vm.foreach(&vm->vm, base, d.offset - base, insert_pte, &d); + if (HAS_64K_PAGES(gt->i915)) { + vm->vm.foreach(&vm->vm, base, d.offset - base, + xehpsdv_insert_pte, &d); + d.offset = base + CHUNK_SZ; + vm->vm.foreach(&vm->vm, + d.offset, + 2 * CHUNK_SZ, + xehpsdv_toggle_pdes, &d); + } else { + vm->vm.foreach(&vm->vm, base, d.offset - base, + insert_pte, &d); + } } return &vm->vm; @@ -270,19 +327,38 @@ static int emit_pte(struct i915_request *rq, u64 offset, int length) { + bool has_64K_pages = HAS_64K_PAGES(rq->engine->i915); const u64 encode = rq->context->vm->pte_encode(0, cache_level, is_lmem ? 
PTE_LM : 0); struct intel_ring *ring = rq->ring; - int total = 0; + int pkt, dword_length; + u32 total = 0; + u32 page_size; u32 *hdr, *cs; - int pkt; GEM_BUG_ON(GRAPHICS_VER(rq->engine->i915) < 8); + page_size = I915_GTT_PAGE_SIZE; + dword_length = 0x400; + /* Compute the page directory offset for the target address range */ - offset >>= 12; - offset *= sizeof(u64); - offset += 2 * CHUNK_SZ; + if (has_64K_pages) { + GEM_BUG_ON(!IS_ALIGNED(offset, SZ_2M)); + + offset /= SZ_2M; + offset *= SZ_64K; + offset += 3 * CHUNK_SZ; + + if (is_lmem) { + page_size = I915_GTT_PAGE_SIZE_64K; + dword_length = 0x40; + } + } else { + offset >>= 12; + offset *= sizeof(u64); + offset += 2 * CHUNK_SZ; + } + offset += (u64)rq->engine->instance << 32; cs = intel_ring_begin(rq, 6); @@ -290,7 +366,7 @@ static int emit_pte(struct i915_request *rq, return PTR_ERR(cs); /* Pack as many PTE updates as possible into a single MI command */ - pkt = min_t(int, 0x400, ring->space / sizeof(u32) + 5); + pkt = min_t(int, dword_length, ring->space / sizeof(u32) + 5); pkt = min_t(int, pkt, (ring->size - ring->emit) / sizeof(u32) + 5); hdr = cs; @@ -300,6 +376,8 @@ static int emit_pte(struct i915_request *rq, do { if (cs - hdr >= pkt) { + int dword_rem; + *hdr += cs - hdr - 2; *cs++ = MI_NOOP; @@ -311,7 +389,18 @@ static int emit_pte(struct i915_request *rq, if (IS_ERR(cs)) return PTR_ERR(cs); - pkt = min_t(int, 0x400, ring->space / sizeof(u32) + 5); + dword_rem = dword_length; + if (has_64K_pages) { + if (IS_ALIGNED(total, SZ_2M)) { + offset = round_up(offset, SZ_64K); + } else { + dword_rem = SZ_2M - (total & (SZ_2M - 1)); + dword_rem /= page_size; + dword_rem *= 2; + } + } + + pkt = min_t(int, dword_rem, ring->space / sizeof(u32) + 5); pkt = min_t(int, pkt, (ring->size - ring->emit) / sizeof(u32) + 5); hdr = cs; @@ -320,13 +409,15 @@ static int emit_pte(struct i915_request *rq, *cs++ = upper_32_bits(offset); } + GEM_BUG_ON(!IS_ALIGNED(it->dma, page_size)); + *cs++ = lower_32_bits(encode | it->dma); *cs++ = upper_32_bits(encode | it->dma); offset += 8; - total += I915_GTT_PAGE_SIZE; + total += page_size; - it->dma += I915_GTT_PAGE_SIZE; + it->dma += page_size; if (it->dma >= it->max) { it->sg = __sg_next(it->sg); if (!it->sg || sg_dma_len(it->sg) == 0) @@ -357,7 +448,8 @@ static bool wa_1209644611_applies(int ver, u32 size) return height % 4 == 3 && height <= 8; } -static int emit_copy(struct i915_request *rq, int size) +static int emit_copy(struct i915_request *rq, + u32 dst_offset, u32 src_offset, int size) { const int ver = GRAPHICS_VER(rq->engine->i915); u32 instance = rq->engine->instance; @@ -372,31 +464,31 @@ static int emit_copy(struct i915_request *rq, int size) *cs++ = BLT_DEPTH_32 | PAGE_SIZE; *cs++ = 0; *cs++ = size >> PAGE_SHIFT << 16 | PAGE_SIZE / 4; - *cs++ = CHUNK_SZ; /* dst offset */ + *cs++ = dst_offset; *cs++ = instance; *cs++ = 0; *cs++ = PAGE_SIZE; - *cs++ = 0; /* src offset */ + *cs++ = src_offset; *cs++ = instance; } else if (ver >= 8) { *cs++ = XY_SRC_COPY_BLT_CMD | BLT_WRITE_RGBA | (10 - 2); *cs++ = BLT_DEPTH_32 | BLT_ROP_SRC_COPY | PAGE_SIZE; *cs++ = 0; *cs++ = size >> PAGE_SHIFT << 16 | PAGE_SIZE / 4; - *cs++ = CHUNK_SZ; /* dst offset */ + *cs++ = dst_offset; *cs++ = instance; *cs++ = 0; *cs++ = PAGE_SIZE; - *cs++ = 0; /* src offset */ + *cs++ = src_offset; *cs++ = instance; } else { GEM_BUG_ON(instance); *cs++ = SRC_COPY_BLT_CMD | BLT_WRITE_RGBA | (6 - 2); *cs++ = BLT_DEPTH_32 | BLT_ROP_SRC_COPY | PAGE_SIZE; *cs++ = size >> PAGE_SHIFT << 16 | PAGE_SIZE; - *cs++ = CHUNK_SZ; /* dst offset */ + *cs++ = 
dst_offset; *cs++ = PAGE_SIZE; - *cs++ = 0; /* src offset */ + *cs++ = src_offset; } intel_ring_advance(rq, cs); @@ -424,6 +516,7 @@ intel_context_migrate_copy(struct intel_context *ce, GEM_BUG_ON(ce->ring->size < SZ_64K); do { + u32 src_offset, dst_offset; int len; rq = i915_request_create(ce); @@ -451,15 +544,28 @@ intel_context_migrate_copy(struct intel_context *ce, if (err) goto out_rq; - len = emit_pte(rq, &it_src, src_cache_level, src_is_lmem, 0, - CHUNK_SZ); + src_offset = 0; + dst_offset = CHUNK_SZ; + if (HAS_64K_PAGES(ce->engine->i915)) { + GEM_BUG_ON(!src_is_lmem && !dst_is_lmem); + + src_offset = 0; + dst_offset = 0; + if (src_is_lmem) + src_offset = CHUNK_SZ; + if (dst_is_lmem) + dst_offset = 2 * CHUNK_SZ; + } + + len = emit_pte(rq, &it_src, src_cache_level, src_is_lmem, + src_offset, CHUNK_SZ); if (len <= 0) { err = len; goto out_rq; } err = emit_pte(rq, &it_dst, dst_cache_level, dst_is_lmem, - CHUNK_SZ, len); + dst_offset, len); if (err < 0) goto out_rq; if (err < len) { @@ -471,7 +577,7 @@ intel_context_migrate_copy(struct intel_context *ce, if (err) goto out_rq; - err = emit_copy(rq, len); + err = emit_copy(rq, dst_offset, src_offset, len); /* Arbitration is re-enabled between requests. */ out_rq: @@ -569,18 +675,20 @@ static u32 *_i915_ctrl_surf_copy_blt(u32 *cmd, u64 src_addr, u64 dst_addr, } static int emit_clear(struct i915_request *rq, + u64 offset, int size, u32 value, bool is_lmem) { - const int ver = GRAPHICS_VER(rq->engine->i915); - u32 instance = rq->engine->instance; - u32 *cs; struct drm_i915_private *i915 = rq->engine->i915; + const int ver = GRAPHICS_VER(rq->engine->i915); u32 num_ccs_blks, ccs_ring_size; + u32 *cs; GEM_BUG_ON(size >> PAGE_SHIFT > S16_MAX); + offset += (u64)rq->engine->instance << 32; + /* Clear flat css only when value is 0 */ ccs_ring_size = (is_lmem && !value) ? calc_ctrl_surf_instr_size(i915, size) @@ -595,17 +703,17 @@ static int emit_clear(struct i915_request *rq, *cs++ = BLT_DEPTH_32 | BLT_ROP_COLOR_COPY | PAGE_SIZE; *cs++ = 0; *cs++ = size >> PAGE_SHIFT << 16 | PAGE_SIZE / 4; - *cs++ = 0; /* offset */ - *cs++ = instance; + *cs++ = lower_32_bits(offset); + *cs++ = upper_32_bits(offset); *cs++ = value; *cs++ = MI_NOOP; } else { - GEM_BUG_ON(instance); + GEM_BUG_ON(upper_32_bits(offset)); *cs++ = XY_COLOR_BLT_CMD | BLT_WRITE_RGBA | (6 - 2); *cs++ = BLT_DEPTH_32 | BLT_ROP_COLOR_COPY | PAGE_SIZE; *cs++ = 0; *cs++ = size >> PAGE_SHIFT << 16 | PAGE_SIZE / 4; - *cs++ = 0; + *cs++ = lower_32_bits(offset); *cs++ = value; } @@ -621,17 +729,15 @@ static int emit_clear(struct i915_request *rq, * and use it as a source. 
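 * In the XY_CTRL_SURF_COPY_BLT below, the source is read with
 * DIRECT_ACCESS (the just-zeroed main surface) and the destination is
 * written with INDIRECT_ACCESS (the CCS behind that same LMEM range),
 * so the matching CCS blocks end up cleared as well.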
*/ - cs = i915_flush_dw(cs, (u64)instance << 32, - MI_FLUSH_LLC | MI_FLUSH_CCS); + cs = i915_flush_dw(cs, offset, MI_FLUSH_LLC | MI_FLUSH_CCS); cs = _i915_ctrl_surf_copy_blt(cs, - (u64)instance << 32, - (u64)instance << 32, + offset, + offset, DIRECT_ACCESS, INDIRECT_ACCESS, 1, 1, num_ccs_blks); - cs = i915_flush_dw(cs, (u64)instance << 32, - MI_FLUSH_LLC | MI_FLUSH_CCS); + cs = i915_flush_dw(cs, offset, MI_FLUSH_LLC | MI_FLUSH_CCS); } intel_ring_advance(rq, cs); return 0; @@ -656,6 +762,7 @@ intel_context_migrate_clear(struct intel_context *ce, GEM_BUG_ON(ce->ring->size < SZ_64K); do { + u32 offset; int len; rq = i915_request_create(ce); @@ -683,7 +790,11 @@ intel_context_migrate_clear(struct intel_context *ce, if (err) goto out_rq; - len = emit_pte(rq, &it, cache_level, is_lmem, 0, CHUNK_SZ); + offset = 0; + if (HAS_64K_PAGES(ce->engine->i915) && is_lmem) + offset = CHUNK_SZ; + + len = emit_pte(rq, &it, cache_level, is_lmem, offset, CHUNK_SZ); if (len <= 0) { err = len; goto out_rq; @@ -693,7 +804,7 @@ intel_context_migrate_clear(struct intel_context *ce, if (err) goto out_rq; - err = emit_clear(rq, len, value, is_lmem); + err = emit_clear(rq, offset, len, value, is_lmem); /* Arbitration is re-enabled between requests. */ out_rq: From patchwork Thu Dec 9 15:45:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Ramalingam C X-Patchwork-Id: 12667367 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id F39C5C433FE for ; Thu, 9 Dec 2021 16:58:58 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 9E5C610E9DA; Thu, 9 Dec 2021 16:53:52 +0000 (UTC) Received: from mga06.intel.com (mga06.intel.com [134.134.136.31]) by gabe.freedesktop.org (Postfix) with ESMTPS id 86A04891BB; Thu, 9 Dec 2021 15:46:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1639064796; x=1670600796; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=+Ms4o8MOBp9yV/IwgCjbnr1MGM81PSz+4/En43u0BfE=; b=e9NfwvwZfYC/xF7lpVFL0TaSZW0w2WHzL/Uf2y4NzWhDQ2b+V1b0opk2 hCVKcGj2zU4OcOlNFc9v/XG7MwfZ/i8Zl1ng1kMqy2i6rslZoUstnUN4u 8eNqzgnm/rnXcwJ9qjbpUkipyOwDn84i2I2/gtDbTef7y55IPkhzo9uuY YGK0E06/FTbj3zJqKPOMrclDHopYu/1dy9IUlal+3C+J17POWgYTn348U Su/ac6e5voKtPFcqnfNaW07ImTqRz9FATFJzsyQ7bbv30gibvy4bV/ddi V8pIzMLSKR1RW2438ScBWnf7PLAhoNVK4oHfdep1uBWwjB0ApmS6PsTF6 Q==; X-IronPort-AV: E=McAfee;i="6200,9189,10192"; a="298917074" X-IronPort-AV: E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="298917074" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:46:22 -0800 X-IronPort-AV: E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="503535223" Received: from ramaling-i9x.iind.intel.com ([10.99.66.205]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:46:19 -0800 From: Ramalingam C To: dri-devel , intel-gfx Date: Thu, 9 Dec 2021 21:15:28 +0530 Message-Id: <20211209154533.4084-12-ramalingam.c@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: 
<20211209154533.4084-1-ramalingam.c@intel.com> References: <20211209154533.4084-1-ramalingam.c@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 11/16] drm/i915/dg2: Add DG2 unified compression X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Hellstrom Thomas , Matthew Auld Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" From: Matt Roper DG2 unifies render compression and media compression into a single format for the first time. The programming and buffer layout is supposed to match compression on older gen12 platforms, but the actual compression algorithm is different from any previous platform; as such, we need a new framebuffer modifier to represent buffers in this format, but otherwise we can re-use the existing gen12 compression driver logic. Signed-off-by: Matt Roper cc: Radhakrishna Sripada Signed-off-by: Mika Kahola (v2) cc: Anshuman Gupta Signed-off-by: Juha-Pekka Heikkilä Signed-off-by: Ramalingam C --- drivers/gpu/drm/i915/display/intel_fb.c | 13 ++++++++ .../drm/i915/display/skl_universal_plane.c | 33 +++++++++++++++---- include/uapi/drm/drm_fourcc.h | 22 +++++++++++++ 3 files changed, 61 insertions(+), 7 deletions(-) diff --git a/drivers/gpu/drm/i915/display/intel_fb.c b/drivers/gpu/drm/i915/display/intel_fb.c index 46505c69fe72..e15216f1cb82 100644 --- a/drivers/gpu/drm/i915/display/intel_fb.c +++ b/drivers/gpu/drm/i915/display/intel_fb.c @@ -141,6 +141,14 @@ struct intel_modifier_desc { static const struct intel_modifier_desc intel_modifiers[] = { { + .modifier = I915_FORMAT_MOD_4_TILED_DG2_MC_CCS, + .display_ver = { 13, 14 }, + .plane_caps = INTEL_PLANE_CAP_TILING_4 | INTEL_PLANE_CAP_CCS_MC, + }, { + .modifier = I915_FORMAT_MOD_4_TILED_DG2_RC_CCS, + .display_ver = { 13, 14 }, + .plane_caps = INTEL_PLANE_CAP_TILING_4 | INTEL_PLANE_CAP_CCS_RC, + }, { .modifier = I915_FORMAT_MOD_4_TILED, .display_ver = { 13, 14 }, .plane_caps = INTEL_PLANE_CAP_TILING_4, @@ -550,6 +558,8 @@ intel_tile_width_bytes(const struct drm_framebuffer *fb, int color_plane) return 128; else return 512; + case I915_FORMAT_MOD_4_TILED_DG2_RC_CCS: + case I915_FORMAT_MOD_4_TILED_DG2_MC_CCS: case I915_FORMAT_MOD_4_TILED: /* * Each 4K tile consists of 64B(8*8) subtiles, with @@ -752,6 +762,9 @@ unsigned int intel_surf_alignment(const struct drm_framebuffer *fb, case I915_FORMAT_MOD_4_TILED: case I915_FORMAT_MOD_Yf_TILED: return 1 * 1024 * 1024; + case I915_FORMAT_MOD_4_TILED_DG2_RC_CCS: + case I915_FORMAT_MOD_4_TILED_DG2_MC_CCS: + return 16 * 1024; default: MISSING_CASE(fb->modifier); return 0; diff --git a/drivers/gpu/drm/i915/display/skl_universal_plane.c b/drivers/gpu/drm/i915/display/skl_universal_plane.c index f62ba027fcf9..d80424194c75 100644 --- a/drivers/gpu/drm/i915/display/skl_universal_plane.c +++ b/drivers/gpu/drm/i915/display/skl_universal_plane.c @@ -764,6 +764,14 @@ static u32 skl_plane_ctl_tiling(u64 fb_modifier) return PLANE_CTL_TILED_Y; case I915_FORMAT_MOD_4_TILED: return PLANE_CTL_TILED_4; + case I915_FORMAT_MOD_4_TILED_DG2_RC_CCS: + return PLANE_CTL_TILED_4 | + PLANE_CTL_RENDER_DECOMPRESSION_ENABLE | + PLANE_CTL_CLEAR_COLOR_DISABLE; + case I915_FORMAT_MOD_4_TILED_DG2_MC_CCS: + return PLANE_CTL_TILED_4 | + PLANE_CTL_MEDIA_DECOMPRESSION_ENABLE | + PLANE_CTL_CLEAR_COLOR_DISABLE; case I915_FORMAT_MOD_Y_TILED_CCS: case I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS_CC: return PLANE_CTL_TILED_Y | 
PLANE_CTL_RENDER_DECOMPRESSION_ENABLE; @@ -2073,6 +2081,10 @@ static bool gen12_plane_has_mc_ccs(struct drm_i915_private *i915, if (IS_ADLP_DISPLAY_STEP(i915, STEP_A0, STEP_B0)) return false; + /* Wa_14013215631 */ + if (IS_DG2_DISPLAY_STEP(i915, STEP_A0, STEP_C0)) + return false; + return plane_id < PLANE_SPRITE4; } @@ -2312,18 +2324,25 @@ skl_get_initial_plane_config(struct intel_crtc *crtc, break; case PLANE_CTL_TILED_Y: plane_config->tiling = I915_TILING_Y; - if (val & PLANE_CTL_RENDER_DECOMPRESSION_ENABLE) - fb->modifier = DISPLAY_VER(dev_priv) >= 12 ? - I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS : - I915_FORMAT_MOD_Y_TILED_CCS; - else if (val & PLANE_CTL_MEDIA_DECOMPRESSION_ENABLE) + if (val & PLANE_CTL_RENDER_DECOMPRESSION_ENABLE) { + if (DISPLAY_VER(dev_priv) >= 12) + fb->modifier = I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS; + else + fb->modifier = I915_FORMAT_MOD_Y_TILED_CCS; + } else if (val & PLANE_CTL_MEDIA_DECOMPRESSION_ENABLE) { fb->modifier = I915_FORMAT_MOD_Y_TILED_GEN12_MC_CCS; - else + } else { fb->modifier = I915_FORMAT_MOD_Y_TILED; + } break; case PLANE_CTL_TILED_YF: /* aka PLANE_CTL_TILED_4 on XE_LPD+ */ if (HAS_4TILE(dev_priv)) { - fb->modifier = I915_FORMAT_MOD_4_TILED; + if (val & PLANE_CTL_RENDER_DECOMPRESSION_ENABLE) + fb->modifier = I915_FORMAT_MOD_4_TILED_DG2_RC_CCS; + else if (val & PLANE_CTL_MEDIA_DECOMPRESSION_ENABLE) + fb->modifier = I915_FORMAT_MOD_4_TILED_DG2_MC_CCS; + else + fb->modifier = I915_FORMAT_MOD_4_TILED; } else { if (val & PLANE_CTL_RENDER_DECOMPRESSION_ENABLE) fb->modifier = I915_FORMAT_MOD_Yf_TILED_CCS; diff --git a/include/uapi/drm/drm_fourcc.h b/include/uapi/drm/drm_fourcc.h index a146c6df1066..51fdda26844a 100644 --- a/include/uapi/drm/drm_fourcc.h +++ b/include/uapi/drm/drm_fourcc.h @@ -576,6 +576,28 @@ extern "C" { */ #define I915_FORMAT_MOD_4_TILED fourcc_mod_code(INTEL, 9) +/* + * Intel color control surfaces (CCS) for DG2 render compression. + * + * DG2 uses a new compression format for render compression. The general + * layout is the same as I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS, + * but a new hashing/compression algorithm is used, so a fresh modifier must + * be associated with buffers of this type. Render compression uses 128 byte + * compression blocks. + */ +#define I915_FORMAT_MOD_4_TILED_DG2_RC_CCS fourcc_mod_code(INTEL, 10) + +/* + * Intel color control surfaces (CCS) for DG2 media compression. + * + * DG2 uses a new compression format for media compression. The general + * layout is the same as I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS, + * but a new hashing/compression algorithm is used, so a fresh modifier must + * be associated with buffers of this type. Media compression uses 256 byte + * compression blocks. 
+ */ +#define I915_FORMAT_MOD_4_TILED_DG2_MC_CCS fourcc_mod_code(INTEL, 11) + /* * Tiled, NV12MT, grouped in 64 (pixels) x 32 (lines) -sized macroblocks * From patchwork Thu Dec 9 15:45:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Ramalingam C X-Patchwork-Id: 12667663 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id AD1ACC433EF for ; Thu, 9 Dec 2021 17:03:11 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 76E9610F57F; Thu, 9 Dec 2021 16:55:37 +0000 (UTC) Received: from mga06.intel.com (mga06.intel.com [134.134.136.31]) by gabe.freedesktop.org (Postfix) with ESMTPS id B90AF891BB; Thu, 9 Dec 2021 15:46:40 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1639064800; x=1670600800; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=/YZINTUP80qemEcthVKIBo9gnQMSH6wAfWHwcFROzz4=; b=TMVzhsAt485Uep39+3y1gULjGuG2WwdTwG4JBew+V1u7fbryWjRdExpL jKVSQ7EM4WuutE+mTumFeRfKgvxtQtPyw/VHdQHjtBT1kFvgVgdhKA1kh 7tGZNSgdXmTtDkGlKsagD2xeRyS4A9Qv5wt4ern6t84p3VgC0gvasyx2P JkcfqtJP8IO7u+Xpjzn8LrOJ2qxf0Pbwe4WM8EOBuQ16XHUJqCi4fNJRj BxNM9uBINUWYkfiwHt3IU5eVN1tNPWWfvOjqNEVlKl57909Xa8ThCbUv8 J5wQsteq8GiI0t1wlwQiNCf1+WCChnbAaNWwfBgMHmTKrk/yvEzoDYfem Q==; X-IronPort-AV: E=McAfee;i="6200,9189,10192"; a="298917101" X-IronPort-AV: E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="298917101" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:46:25 -0800 X-IronPort-AV: E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="503535234" Received: from ramaling-i9x.iind.intel.com ([10.99.66.205]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:46:22 -0800 From: Ramalingam C To: dri-devel , intel-gfx Date: Thu, 9 Dec 2021 21:15:29 +0530 Message-Id: <20211209154533.4084-13-ramalingam.c@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20211209154533.4084-1-ramalingam.c@intel.com> References: <20211209154533.4084-1-ramalingam.c@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 12/16] uapi/drm/dg2: Introduce format modifier for DG2 clear color X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Hellstrom Thomas , Matthew Auld Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" From: Mika Kahola DG2 clear color render compression uses Tile4 layout. Therefore, we need to define a new format modifier for uAPI to support clear color rendering. 
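For context, a hypothetical userspace sketch (libdrm, xf86drmMode.h) of
creating a framebuffer with the new modifier. This is not part of the patch:
fd, bo_handle, width, height and the pitch/offset values are placeholders,
and the two-plane layout (fast clear color in plane 1 at offset 0) follows
the handling added elsewhere in this series:

    /* Plane 0: Tile4 color surface. Plane 1: fast clear color at offset 0. */
    uint32_t handles[4] = { bo_handle, bo_handle };
    uint32_t pitches[4] = { color_pitch, cc_pitch };
    uint32_t offsets[4] = { 0, cc_offset };
    uint64_t mods[4]    = { I915_FORMAT_MOD_4_TILED_DG2_RC_CCS_CC,
                            I915_FORMAT_MOD_4_TILED_DG2_RC_CCS_CC };
    uint32_t fb_id;
    int ret;

    ret = drmModeAddFB2WithModifiers(fd, width, height, DRM_FORMAT_XRGB8888,
                                     handles, pitches, offsets, mods,
                                     &fb_id, DRM_MODE_FB_MODIFIERS);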
Signed-off-by: Mika Kahola cc: Anshuman Gupta Signed-off-by: Juha-Pekka Heikkilä Signed-off-by: Ramalingam C --- drivers/gpu/drm/i915/display/intel_fb.c | 8 ++++++++ drivers/gpu/drm/i915/display/skl_universal_plane.c | 9 ++++++++- include/uapi/drm/drm_fourcc.h | 8 ++++++++ 3 files changed, 24 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/i915/display/intel_fb.c b/drivers/gpu/drm/i915/display/intel_fb.c index e15216f1cb82..f10e77cb5b4a 100644 --- a/drivers/gpu/drm/i915/display/intel_fb.c +++ b/drivers/gpu/drm/i915/display/intel_fb.c @@ -144,6 +144,12 @@ static const struct intel_modifier_desc intel_modifiers[] = { .modifier = I915_FORMAT_MOD_4_TILED_DG2_MC_CCS, .display_ver = { 13, 14 }, .plane_caps = INTEL_PLANE_CAP_TILING_4 | INTEL_PLANE_CAP_CCS_MC, + }, { + .modifier = I915_FORMAT_MOD_4_TILED_DG2_RC_CCS_CC, + .display_ver = { 13, 14 }, + .plane_caps = INTEL_PLANE_CAP_TILING_4 | INTEL_PLANE_CAP_CCS_RC_CC, + + .ccs.cc_planes = BIT(1), }, { .modifier = I915_FORMAT_MOD_4_TILED_DG2_RC_CCS, .display_ver = { 13, 14 }, @@ -559,6 +565,7 @@ intel_tile_width_bytes(const struct drm_framebuffer *fb, int color_plane) else return 512; case I915_FORMAT_MOD_4_TILED_DG2_RC_CCS: + case I915_FORMAT_MOD_4_TILED_DG2_RC_CCS_CC: case I915_FORMAT_MOD_4_TILED_DG2_MC_CCS: case I915_FORMAT_MOD_4_TILED: /* @@ -763,6 +770,7 @@ unsigned int intel_surf_alignment(const struct drm_framebuffer *fb, case I915_FORMAT_MOD_Yf_TILED: return 1 * 1024 * 1024; case I915_FORMAT_MOD_4_TILED_DG2_RC_CCS: + case I915_FORMAT_MOD_4_TILED_DG2_RC_CCS_CC: case I915_FORMAT_MOD_4_TILED_DG2_MC_CCS: return 16 * 1024; default: diff --git a/drivers/gpu/drm/i915/display/skl_universal_plane.c b/drivers/gpu/drm/i915/display/skl_universal_plane.c index d80424194c75..9a89df9c0243 100644 --- a/drivers/gpu/drm/i915/display/skl_universal_plane.c +++ b/drivers/gpu/drm/i915/display/skl_universal_plane.c @@ -772,6 +772,8 @@ static u32 skl_plane_ctl_tiling(u64 fb_modifier) return PLANE_CTL_TILED_4 | PLANE_CTL_MEDIA_DECOMPRESSION_ENABLE | PLANE_CTL_CLEAR_COLOR_DISABLE; + case I915_FORMAT_MOD_4_TILED_DG2_RC_CCS_CC: + return PLANE_CTL_TILED_4 | PLANE_CTL_RENDER_DECOMPRESSION_ENABLE; case I915_FORMAT_MOD_Y_TILED_CCS: case I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS_CC: return PLANE_CTL_TILED_Y | PLANE_CTL_RENDER_DECOMPRESSION_ENABLE; @@ -2337,10 +2339,15 @@ skl_get_initial_plane_config(struct intel_crtc *crtc, break; case PLANE_CTL_TILED_YF: /* aka PLANE_CTL_TILED_4 on XE_LPD+ */ if (HAS_4TILE(dev_priv)) { - if (val & PLANE_CTL_RENDER_DECOMPRESSION_ENABLE) + u32 rc_mask = PLANE_CTL_RENDER_DECOMPRESSION_ENABLE | + PLANE_CTL_CLEAR_COLOR_DISABLE; + + if ((val & rc_mask) == rc_mask) fb->modifier = I915_FORMAT_MOD_4_TILED_DG2_RC_CCS; else if (val & PLANE_CTL_MEDIA_DECOMPRESSION_ENABLE) fb->modifier = I915_FORMAT_MOD_4_TILED_DG2_MC_CCS; + else if (val & PLANE_CTL_RENDER_DECOMPRESSION_ENABLE) + fb->modifier = I915_FORMAT_MOD_4_TILED_DG2_RC_CCS_CC; else fb->modifier = I915_FORMAT_MOD_4_TILED; } else { diff --git a/include/uapi/drm/drm_fourcc.h b/include/uapi/drm/drm_fourcc.h index 51fdda26844a..b155f69f2344 100644 --- a/include/uapi/drm/drm_fourcc.h +++ b/include/uapi/drm/drm_fourcc.h @@ -598,6 +598,14 @@ extern "C" { */ #define I915_FORMAT_MOD_4_TILED_DG2_MC_CCS fourcc_mod_code(INTEL, 11) +/* + * Intel color control surfaces (CCS) for DG2 clear color render compression. + * + * DG2 uses a unified compression format for clear color render compression. + * The general layout is a tiled layout using 4Kb tiles i.e. Tile4 layout. 
+ */ +#define I915_FORMAT_MOD_4_TILED_DG2_RC_CCS_CC fourcc_mod_code(INTEL, 12) + /* * Tiled, NV12MT, grouped in 64 (pixels) x 32 (lines) -sized macroblocks * From patchwork Thu Dec 9 15:45:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Ramalingam C X-Patchwork-Id: 12667653 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 1DA3BC433EF for ; Thu, 9 Dec 2021 17:02:39 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id B350310F28A; Thu, 9 Dec 2021 16:55:25 +0000 (UTC) Received: from mga06.intel.com (mga06.intel.com [134.134.136.31]) by gabe.freedesktop.org (Postfix) with ESMTPS id D1343891D4; Thu, 9 Dec 2021 15:46:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1639064803; x=1670600803; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=9OlL+gZZbnMSDuqtV81u4kMOiLsRp8FFYXagysWNZNo=; b=HRIv6+dIwpzsuj9I1CwLxCqWE7dU5Co0Worj0R9N7l24BKULUwfPUwkj 6qobDBeLhguBwwZFd/PzzImLXzYEcvLV80Z9e9TiIZQoe7cNHavMDsVNr AvoO3CHgTZpvBWgP28vvRFVmDzkx+dMUrthBCXDoEC68qa6E0bNNFCPjC XP3hpoNzWSyhc4h3k5PrZd8Ywry12/YMrUSMOw10v9zF4vEAQR8n4l8K1 oB3bPWZSYF/llcGwmVgwQLHBSvnPmimO1cIIlcSv6vtkT5BEZnHfM+mCR 6FzzvqxteXTAc4BUB7+tCeR2MQwrfQQEgUDOdxBKslrYzJeyPLCFXasyt w==; X-IronPort-AV: E=McAfee;i="6200,9189,10192"; a="298917123" X-IronPort-AV: E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="298917123" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:46:27 -0800 X-IronPort-AV: E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="503535237" Received: from ramaling-i9x.iind.intel.com ([10.99.66.205]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:46:25 -0800 From: Ramalingam C To: dri-devel , intel-gfx Date: Thu, 9 Dec 2021 21:15:30 +0530 Message-Id: <20211209154533.4084-14-ramalingam.c@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20211209154533.4084-1-ramalingam.c@intel.com> References: <20211209154533.4084-1-ramalingam.c@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 13/16] drm/i915/dg2: Flat CCS Support X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Hellstrom Thomas , Matthew Auld Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" From: Anshuman Gupta DG2 onwards discrete gfx has support for new flat CCS mapping, which brings in display feature in to avoid Aux walk for compressed surface. This support build on top of Flat CCS support added in XEHPSDV. FLAT CCS surface base address should be 64k aligned, Compressed displayable surfaces must use tile4 format. 
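For scale, flat CCS metadata costs 1 byte per 256 bytes of main surface (the
size >> 8 ratio used by GET_CCS_SIZE earlier in this series), so, for example,
a 16 MiB Tile4 framebuffer is backed by 16 MiB / 256 = 64 KiB of CCS.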
HAS: 1407880786 B.Spec : 7655 B.Spec : 53902 Cc: Mika Kahola Signed-off-by: Anshuman Gupta Signed-off-by: Juha-Pekka Heikkilä Signed-off-by: Ramalingam C --- drivers/gpu/drm/i915/display/intel_display.c | 4 ++- drivers/gpu/drm/i915/display/intel_fb.c | 32 +++++++++++++------ .../drm/i915/display/skl_universal_plane.c | 14 +++++--- 3 files changed, 35 insertions(+), 15 deletions(-) diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c index 83253c62b6d6..fd84ed0da41c 100644 --- a/drivers/gpu/drm/i915/display/intel_display.c +++ b/drivers/gpu/drm/i915/display/intel_display.c @@ -8628,7 +8628,9 @@ static void intel_atomic_prepare_plane_clear_colors(struct intel_atomic_state *s /* * The layout of the fast clear color value expected by HW - * (the DRM ABI requiring this value to be located in fb at offset 0 of plane#2): + * (the DRM ABI requiring this value to be located in fb at + * offset 0 of cc plane, plane #2 previous generations or + * plane #1 for flat ccs): * - 4 x 4 bytes per-channel value * (in surface type specific float/int format provided by the fb user) * - 8 bytes native color value used by the display diff --git a/drivers/gpu/drm/i915/display/intel_fb.c b/drivers/gpu/drm/i915/display/intel_fb.c index f10e77cb5b4a..72040f580911 100644 --- a/drivers/gpu/drm/i915/display/intel_fb.c +++ b/drivers/gpu/drm/i915/display/intel_fb.c @@ -107,6 +107,21 @@ static const struct drm_format_info gen12_ccs_cc_formats[] = { .hsub = 1, .vsub = 1, .has_alpha = true }, }; +static const struct drm_format_info gen12_flat_ccs_cc_formats[] = { + { .format = DRM_FORMAT_XRGB8888, .depth = 24, .num_planes = 2, + .char_per_block = { 4, 0 }, .block_w = { 1, 2 }, .block_h = { 1, 1 }, + .hsub = 1, .vsub = 1, }, + { .format = DRM_FORMAT_XBGR8888, .depth = 24, .num_planes = 2, + .char_per_block = { 4, 0 }, .block_w = { 1, 2 }, .block_h = { 1, 1 }, + .hsub = 1, .vsub = 1, }, + { .format = DRM_FORMAT_ARGB8888, .depth = 32, .num_planes = 2, + .char_per_block = { 4, 0 }, .block_w = { 1, 2 }, .block_h = { 1, 1 }, + .hsub = 1, .vsub = 1, .has_alpha = true }, + { .format = DRM_FORMAT_ABGR8888, .depth = 32, .num_planes = 2, + .char_per_block = { 4, 0 }, .block_w = { 1, 2 }, .block_h = { 1, 1 }, + .hsub = 1, .vsub = 1, .has_alpha = true }, +}; + struct intel_modifier_desc { u64 modifier; struct { @@ -150,6 +165,8 @@ static const struct intel_modifier_desc intel_modifiers[] = { .plane_caps = INTEL_PLANE_CAP_TILING_4 | INTEL_PLANE_CAP_CCS_RC_CC, .ccs.cc_planes = BIT(1), + + FORMAT_OVERRIDE(gen12_flat_ccs_cc_formats), }, { .modifier = I915_FORMAT_MOD_4_TILED_DG2_RC_CCS, .display_ver = { 13, 14 }, @@ -399,17 +416,13 @@ bool intel_fb_plane_supports_modifier(struct intel_plane *plane, u64 modifier) static bool format_is_yuv_semiplanar(const struct intel_modifier_desc *md, const struct drm_format_info *info) { - int yuv_planes; - if (!info->is_yuv) return false; - if (plane_caps_contain_any(md->plane_caps, INTEL_PLANE_CAP_CCS_MASK)) - yuv_planes = 4; + if (hweight8(md->ccs.planar_aux_planes) == 2) + return info->num_planes == 4; else - yuv_planes = 2; - - return info->num_planes == yuv_planes; + return info->num_planes == 2; } /** @@ -534,12 +547,13 @@ static unsigned int gen12_ccs_aux_stride(struct intel_framebuffer *fb, int ccs_p int skl_main_to_aux_plane(const struct drm_framebuffer *fb, int main_plane) { + const struct intel_modifier_desc *md = lookup_modifier(fb->modifier); struct drm_i915_private *i915 = to_i915(fb->dev); - if (intel_fb_is_ccs_modifier(fb->modifier)) + if 
(md->ccs.packed_aux_planes | md->ccs.planar_aux_planes) return main_to_ccs_plane(fb, main_plane); else if (DISPLAY_VER(i915) < 11 && - intel_format_info_is_yuv_semiplanar(fb->format, fb->modifier)) + format_is_yuv_semiplanar(md, fb->format)) return 1; else return 0; diff --git a/drivers/gpu/drm/i915/display/skl_universal_plane.c b/drivers/gpu/drm/i915/display/skl_universal_plane.c index 9a89df9c0243..ed2883409e91 100644 --- a/drivers/gpu/drm/i915/display/skl_universal_plane.c +++ b/drivers/gpu/drm/i915/display/skl_universal_plane.c @@ -1136,7 +1136,9 @@ skl_program_plane_arm(struct intel_plane *plane, intel_de_write_fw(dev_priv, PLANE_OFFSET(pipe, plane_id), (y << 16) | x); - intel_de_write_fw(dev_priv, PLANE_AUX_DIST(pipe, plane_id), aux_dist); + /* FLAT CCS doesn't need to program AUX_DIST */ + if (!HAS_FLAT_CCS(dev_priv)) + intel_de_write_fw(dev_priv, PLANE_AUX_DIST(pipe, plane_id), aux_dist); if (DISPLAY_VER(dev_priv) < 11) intel_de_write_fw(dev_priv, PLANE_AUX_OFFSET(pipe, plane_id), @@ -1543,9 +1545,10 @@ static int skl_check_main_surface(struct intel_plane_state *plane_state) /* * CCS AUX surface doesn't have its own x/y offsets, we must make sure - * they match with the main surface x/y offsets. + * they match with the main surface x/y offsets. On DG2 + * there's no aux plane on fb so skip this checking. */ - if (intel_fb_is_ccs_modifier(fb->modifier)) { + if (intel_fb_is_ccs_modifier(fb->modifier) && aux_plane) { while (!skl_check_main_ccs_coordinates(plane_state, x, y, offset, aux_plane)) { if (offset == 0) @@ -1589,6 +1592,8 @@ static int skl_check_nv12_aux_surface(struct intel_plane_state *plane_state) const struct drm_framebuffer *fb = plane_state->hw.fb; unsigned int rotation = plane_state->hw.rotation; int uv_plane = 1; + int ccs_plane = intel_fb_is_ccs_modifier(fb->modifier) ? 
+ skl_main_to_aux_plane(fb, uv_plane) : 0; int max_width = intel_plane_max_width(plane, fb, uv_plane, rotation); int max_height = intel_plane_max_height(plane, fb, uv_plane, rotation); int x = plane_state->uapi.src.x1 >> 17; @@ -1609,8 +1614,7 @@ static int skl_check_nv12_aux_surface(struct intel_plane_state *plane_state) offset = intel_plane_compute_aligned_offset(&x, &y, plane_state, uv_plane); - if (intel_fb_is_ccs_modifier(fb->modifier)) { - int ccs_plane = main_to_ccs_plane(fb, uv_plane); + if (ccs_plane) { u32 aux_offset = plane_state->view.color_plane[ccs_plane].offset; u32 alignment = intel_surf_alignment(fb, uv_plane); From patchwork Thu Dec 9 15:45:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ramalingam C X-Patchwork-Id: 12667651 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 038AAC433F5 for ; Thu, 9 Dec 2021 17:02:36 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 6966010E7B6; Thu, 9 Dec 2021 16:55:23 +0000 (UTC) Received: from mga06.intel.com (mga06.intel.com [134.134.136.31]) by gabe.freedesktop.org (Postfix) with ESMTPS id 1DF8F891D4; Thu, 9 Dec 2021 15:46:49 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1639064809; x=1670600809; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=86iS98JXZ4kPTyKVOxO/shhOj8RyLDAPgBVvLWijt8c=; b=Hyo3zvS60Zx03EfeozCKmOoEE8PHkAMh/HMGXgRgRNT5N2IFPWgQ5M0S LWHctqnngSQWiYieb5KDlGtkTYuEM0CNI+CYgAOaWb5QL5E5WfNpMfcfH mNfbIiyZ/D8XgcQ5Ho2gfR90v3QEsAnyvCNNHuk3q57rspC77l2NNDRva Vae4mblZcqp1ZLD91VWM3ByKTZfJnSxLkrWZ+RoTG/3faElg+gFTS0ztM +wdKSbUusFDUrbMVJ7gxc5fpc2IyeqjucyIrADkgnm/pXht0maXrO68uG IErjHEY5vr2vA5JyLRg1J/ob5nKQcBQm3TzSeF2s1mTHjEDJlSQttMtIo w==; X-IronPort-AV: E=McAfee;i="6200,9189,10192"; a="298917161" X-IronPort-AV: E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="298917161" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:46:31 -0800 X-IronPort-AV: E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="503535247" Received: from ramaling-i9x.iind.intel.com ([10.99.66.205]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:46:27 -0800 From: Ramalingam C To: dri-devel , intel-gfx Date: Thu, 9 Dec 2021 21:15:31 +0530 Message-Id: <20211209154533.4084-15-ramalingam.c@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20211209154533.4084-1-ramalingam.c@intel.com> References: <20211209154533.4084-1-ramalingam.c@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 14/16] drm/i915/uapi: document behaviour for DG2 64K support X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Kenneth Graunke , Slawomir Milczarek , Pekka Paalanen , Hellstrom Thomas , Matthew Auld , Simon Ser , mesa-dev@lists.freedesktop.org Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" From: 
Matthew Auld On discrete platforms like DG2, we need to support a minimum page size of 64K when dealing with device local-memory. This is quite tricky for various reasons, so try to document the new implicit uapi for this. v2: Fixed suggestions on formatting [Daniel] Signed-off-by: Matthew Auld Signed-off-by: Ramalingam C cc: Simon Ser cc: Pekka Paalanen Cc: Jordan Justen Cc: Kenneth Graunke Cc: mesa-dev@lists.freedesktop.org Cc: Tony Ye Cc: Slawomir Milczarek --- include/uapi/drm/i915_drm.h | 67 ++++++++++++++++++++++++++++++++++--- 1 file changed, 62 insertions(+), 5 deletions(-) diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h index 5e678917da70..b7441593434c 100644 --- a/include/uapi/drm/i915_drm.h +++ b/include/uapi/drm/i915_drm.h @@ -1118,10 +1118,16 @@ struct drm_i915_gem_exec_object2 { /** * When the EXEC_OBJECT_PINNED flag is specified this is populated by * the user with the GTT offset at which this object will be pinned. + * * When the I915_EXEC_NO_RELOC flag is specified this must contain the * presumed_offset of the object. + * * During execbuffer2 the kernel populates it with the value of the * current GTT offset of the object, for future presumed_offset writes. + * + * See struct drm_i915_gem_create_ext for the rules when dealing with + * alignment restrictions for I915_MEMORY_CLASS_DEVICE on devices with + * larger minimum page sizes, like DG2. */ __u64 offset; @@ -3145,11 +3151,62 @@ struct drm_i915_gem_create_ext { * * The (page-aligned) allocated size for the object will be returned. * - * Note that for some devices we have might have further minimum - * page-size restrictions(larger than 4K), like for device local-memory. - * However in general the final size here should always reflect any - * rounding up, if for example using the I915_GEM_CREATE_EXT_MEMORY_REGIONS - * extension to place the object in device local-memory. + * + * **DG2 64K min page size implications:** + * + * On discrete platforms, starting from DG2, we have to contend with GTT + * page size restrictions when dealing with I915_MEMORY_CLASS_DEVICE + * objects. Specifically the hardware only supports 64K or larger GTT + * page sizes for such memory. The kernel will already ensure that all + * I915_MEMORY_CLASS_DEVICE memory is allocated using 64K or larger page + * sizes underneath. + * + * Note that the returned size here will always reflect any required + * rounding up done by the kernel, i.e. 4K will now become 64K on devices + * such as DG2. + * + * **Special DG2 GTT address alignment requirement:** + * + * The GTT alignment will also need to be at least 64K for such objects. + * + * Note that due to how the hardware implements 64K GTT page support, we + * have some further complications: + * + * 1) The entire PDE (which covers a 2M virtual address range) must + * contain only 64K PTEs, i.e. mixing 4K and 64K PTEs in the same + * PDE is forbidden by the hardware. + * + * 2) We still need to support 4K PTEs for I915_MEMORY_CLASS_SYSTEM + * objects. + * + * To handle the above, the kernel implements a memory coloring scheme to + * prevent userspace from mixing I915_MEMORY_CLASS_DEVICE and + * I915_MEMORY_CLASS_SYSTEM objects in the same PDE. If the kernel is + * ever unable to evict the required pages for the given PDE (different + * color) when inserting the object into the GTT then it will simply + * fail the request. + * + * Since userspace needs to manage the GTT address space itself, + * special care is needed to ensure this doesn't happen.
The simplest + * scheme is to simply align and round up all I915_MEMORY_CLASS_DEVICE + * objects to 2M, which avoids any issues here. At the very least this + * is likely needed for objects that can be placed in both + * I915_MEMORY_CLASS_DEVICE and I915_MEMORY_CLASS_SYSTEM, to avoid + * potential issues when the kernel needs to migrate the object behind + * the scenes, since that might also involve evicting other objects. + * + * **To summarise the GTT rules, on platforms like DG2:** + * + * 1) All objects that can be placed in I915_MEMORY_CLASS_DEVICE must + * have 64K alignment. The kernel will reject this otherwise. + * + * 2) All I915_MEMORY_CLASS_DEVICE objects must never be placed in + * the same PDE with other I915_MEMORY_CLASS_SYSTEM objects. The + * kernel will reject this otherwise. + * + * 3) Objects that can be placed in both I915_MEMORY_CLASS_DEVICE and + * I915_MEMORY_CLASS_SYSTEM should probably be aligned and padded out + * to 2M. */ __u64 size; /** From patchwork Thu Dec 9 15:45:32 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ramalingam C X-Patchwork-Id: 12667675 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id B6AFBC433FE for ; Thu, 9 Dec 2021 17:03:37 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 385A710FA1A; Thu, 9 Dec 2021 16:55:51 +0000 (UTC) Received: from mga06.intel.com (mga06.intel.com [134.134.136.31]) by gabe.freedesktop.org (Postfix) with ESMTPS id 818B1891DD; Thu, 9 Dec 2021 15:46:55 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1639064815; x=1670600815; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=CFoqsd0cKLjWQpbJ9ovJS/u4/21gRck4+jUTqov7OHg=; b=dnwlxfz3vGuPGIPQ5+42uwQ3wwCve/fmRQLV2/1Kz0q0O055UygqBAI9 KF6QYDePi9gm7c8z/IopnNoaPOgX2Qd+te/IphNpCDkaqIYfwJQkCxfBF sqPs5PLAWqXmKzKS4SjunN4/v3XBohcOsn0WVbXpK+KibHhlS6fJfojDj +pnxLgski8TdeeNVNt73AEAyOTdJgOCWz3wB6Ejfp8D4+ihx8/K3u4kw2 GkfqNFlOMLG7/y3cBXjHHft0wN8L0YJJAbnyXDy57/3HSuZFCSYSnP62M bPZqQabRBk0h4k0rZ7Q4lU++UsG+kI9+ndn/c2hpS+hJMQs4kdYiCYgte Q==; X-IronPort-AV: E=McAfee;i="6200,9189,10192"; a="298917212" X-IronPort-AV: E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="298917212" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:46:35 -0800 X-IronPort-AV: E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="503535261" Received: from ramaling-i9x.iind.intel.com ([10.99.66.205]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:46:31 -0800 From: Ramalingam C To: dri-devel , intel-gfx Date: Thu, 9 Dec 2021 21:15:32 +0530 Message-Id: <20211209154533.4084-16-ramalingam.c@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20211209154533.4084-1-ramalingam.c@intel.com> References: <20211209154533.4084-1-ramalingam.c@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 15/16] drm/i915/Flat-CCS: Document on Flat-CCS memory compression X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list 
List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Kenneth Graunke , Slawomir Milczarek , Pekka Paalanen , Hellstrom Thomas , Matthew Auld , Simon Ser , mesa-dev@lists.freedesktop.org Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Documents the Flat-CCS feature and the kernel handling required, along with the modifiers used. Signed-off-by: Ramalingam C cc: Simon Ser cc: Pekka Paalanen Cc: Jordan Justen Cc: Kenneth Graunke Cc: mesa-dev@lists.freedesktop.org Cc: Tony Ye Cc: Slawomir Milczarek --- drivers/gpu/drm/i915/gt/intel_migrate.c | 47 +++++++++++++++++++++++++ 1 file changed, 47 insertions(+) diff --git a/drivers/gpu/drm/i915/gt/intel_migrate.c b/drivers/gpu/drm/i915/gt/intel_migrate.c index 0fb83d0bec91..2d7ea9b6e8fb 100644 --- a/drivers/gpu/drm/i915/gt/intel_migrate.c +++ b/drivers/gpu/drm/i915/gt/intel_migrate.c @@ -595,6 +595,53 @@ intel_context_migrate_copy(struct intel_context *ce, return err; } +/** + * DOC: Flat-CCS - Memory compression for Local memory + * + * On Xe-HP and later devices, we use dedicated compression control state (CCS) + * stored in local memory for each surface, to support the 3D and media + * compression formats. + * + * The memory required for the CCS of the entire local memory is 1/256 of the + * local memory size. So before the kernel boots, the required memory is reserved + * for the CCS data and a secure register will be programmed with the CCS base + * address. + * + * Flat CCS data needs to be cleared when an lmem object is allocated, and CCS + * data can be copied in and out of the CCS region through + * XY_CTRL_SURF_COPY_BLT. The CPU can't access the CCS data directly. + * + * When we exhaust the lmem, if the object's placements support smem, then we can + * directly decompress the compressed lmem object into smem and start using it + * from smem itself. + * + * But when we need to swap out the compressed lmem object into a smem region + * even though the object's placement doesn't support smem, then we copy the lmem + * content as it is into the smem region along with the CCS data (using + * XY_CTRL_SURF_COPY_BLT). When the object is referenced again, the lmem content + * is swapped back in along with restoration of the CCS data (using + * XY_CTRL_SURF_COPY_BLT) at the corresponding location. + * + * + * Flat-CCS Modifiers for different compression formats + * ---------------------------------------------------- + * + * I915_FORMAT_MOD_4_TILED_DG2_RC_CCS - used to indicate the buffers of Flat CCS + * render compression formats. Though the general layout is the same as + * I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS, a new hashing/compression algorithm is + * used. Render compression uses 128 byte compression blocks. + * + * I915_FORMAT_MOD_4_TILED_DG2_MC_CCS - used to indicate the buffers of Flat CCS + * media compression formats. Though the general layout is the same as + * I915_FORMAT_MOD_Y_TILED_GEN12_MC_CCS, a new hashing/compression algorithm is + * used. Media compression uses 256 byte compression blocks. + * + * I915_FORMAT_MOD_4_TILED_DG2_RC_CCS_CC - used to indicate the buffers of Flat + * CCS clear color render compression formats. It is a unified compression format + * for clear color render compression. The general layout is a tiled layout using + * 4K tiles, i.e. the Tile4 layout.
+ */ + static inline u32 *i915_flush_dw(u32 *cmd, u64 dst, u32 flags) { /* Mask the 3 LSB to use the PPGTT address space */ From patchwork Thu Dec 9 15:45:33 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ramalingam C X-Patchwork-Id: 12667387 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 28E23C433EF for ; Thu, 9 Dec 2021 17:00:07 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 629F110E1ED; Thu, 9 Dec 2021 16:54:11 +0000 (UTC) Received: from mga06.intel.com (mga06.intel.com [134.134.136.31]) by gabe.freedesktop.org (Postfix) with ESMTPS id 0B0A5891DD; Thu, 9 Dec 2021 15:46:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1639064818; x=1670600818; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=L2Hbm0AXNsu1t3eNcZY304Dflr8gpsGAkaCW3gW7Nec=; b=hjkhN77JbzHVRv8r99ifHlyJAxcW9SyBp5AVQif5fy9OUmVDKKqf+ZLW dT3OqnX9diUqH3ftZ/5AwsQ68BTZgIzxgJsRGeG+OkdT+RUpFXMqLUgOu iVYnGr0RSI8G0fFHg+kVAoPRIib5VQTrvtXpHDROBRl7kmAeCVcN2jQXP y6D07oiowg8/TMJ6RGTFvdPbaRHkh1UWT+auZyReSyncYNYcUc1C2H1bc tSmFiECgvnNHIOES6bE6Zj0zDq7HlfpqK9nNZCBMB1Mhd4pBCzifVcZ3P U61wEPcSJDt2QqvPjIwqwzJOl9oSRYYROqHVOvcJUX40W9rcQS+7XxW1j g==; X-IronPort-AV: E=McAfee;i="6200,9189,10192"; a="298917239" X-IronPort-AV: E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="298917239" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:46:39 -0800 X-IronPort-AV: E=Sophos;i="5.88,192,1635231600"; d="scan'208";a="503535285" Received: from ramaling-i9x.iind.intel.com ([10.99.66.205]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Dec 2021 07:46:35 -0800 From: Ramalingam C To: dri-devel , intel-gfx Date: Thu, 9 Dec 2021 21:15:33 +0530 Message-Id: <20211209154533.4084-17-ramalingam.c@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20211209154533.4084-1-ramalingam.c@intel.com> References: <20211209154533.4084-1-ramalingam.c@intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [PATCH v4 16/16] Doc/gpu/rfc/i915: i915 DG2 uAPI X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Kenneth Graunke , Slawomir Milczarek , Daniel Vetter , Pekka Paalanen , Hellstrom Thomas , Matthew Auld , Simon Ser , mesa-dev@lists.freedesktop.org Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Details of the new features getting added as part of DG2 enabling and their implicit impact on the uAPI. 
v2: improved the Flat-CCS documentation [Danvet & CQ] Signed-off-by: Ramalingam C cc: Daniel Vetter cc: Matthew Auld cc: Simon Ser cc: Pekka Paalanen Cc: Jordan Justen Cc: Kenneth Graunke Cc: mesa-dev@lists.freedesktop.org Cc: Tony Ye Cc: Slawomir Milczarek --- Documentation/gpu/rfc/i915_dg2.rst | 32 ++++++++++++++++++++++++++++++ Documentation/gpu/rfc/index.rst | 3 +++ 2 files changed, 35 insertions(+) create mode 100644 Documentation/gpu/rfc/i915_dg2.rst diff --git a/Documentation/gpu/rfc/i915_dg2.rst b/Documentation/gpu/rfc/i915_dg2.rst new file mode 100644 index 000000000000..9d28b1812bc7 --- /dev/null +++ b/Documentation/gpu/rfc/i915_dg2.rst @@ -0,0 +1,32 @@ +==================== +I915 DG2 RFC Section +==================== + +Upstream plan +============= +The plan to upstream the DG2 enabling is: + +* Merge basic HW enabling for DG2 (still without the pciid) +* Merge the 64K support for lmem +* Merge the flat CCS enabling patches +* Add the pciid for DG2 and enable DG2 in CI + + +64K page support for lmem +========================= +On DG2 hw, local-memory supports a minimum GTT page size of 64K only. 4K is not +supported anymore. + +DG2 hw doesn't support 64K (lmem) and 4K (smem) pages in the same ppgtt +page table. Refer to struct drm_i915_gem_create_ext for the implications of +handling the 64K page size. + +.. kernel-doc:: include/uapi/drm/i915_drm.h + :functions: drm_i915_gem_create_ext + + +Flat CCS support for lmem +========================= + +.. kernel-doc:: drivers/gpu/drm/i915/gt/intel_migrate.c + :doc: Flat-CCS - Memory compression for Local memory diff --git a/Documentation/gpu/rfc/index.rst b/Documentation/gpu/rfc/index.rst index 91e93a705230..afb320ed4028 100644 --- a/Documentation/gpu/rfc/index.rst +++ b/Documentation/gpu/rfc/index.rst @@ -20,6 +20,9 @@ host such documentation: i915_gem_lmem.rst +.. toctree:: + i915_dg2.rst + .. toctree:: i915_scheduler.rst
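
To make the 64K/2M rules documented in the series above a little more concrete, here is a minimal userspace sketch of allocating a local-memory object with DRM_IOCTL_I915_GEM_CREATE_EXT and the I915_GEM_CREATE_EXT_MEMORY_REGIONS extension, then padding the softpin range as the drm_i915_gem_create_ext kernel-doc suggests. This is an illustrative sketch, not part of the patch series: the helper names, the fixed memory_instance of 0 and the 2M padding policy are assumptions made for the example, while the ioctl, structures and flags are the ones referenced in the documentation above.

#include <stdint.h>
#include <sys/ioctl.h>
#include <xf86drm.h>
#include <drm/i915_drm.h>

#define SZ_2M (2u << 20)

/* Pad a softpinned GTT offset/length to 2M, per the summary rules above
 * (assumed policy for objects that can live in both lmem and smem). */
static uint64_t pad_to_2m(uint64_t x)
{
	return (x + SZ_2M - 1) & ~(uint64_t)(SZ_2M - 1);
}

/* Create an object backed only by device local-memory (lmem). The kernel
 * returns the rounded-up size, e.g. a 4K request becomes 64K on DG2. */
static int lmem_create(int drm_fd, uint64_t *size, uint32_t *handle)
{
	struct drm_i915_gem_memory_class_instance lmem = {
		.memory_class = I915_MEMORY_CLASS_DEVICE,
		.memory_instance = 0,		/* assumed: first lmem region */
	};
	struct drm_i915_gem_create_ext_memory_regions regions = {
		.base.name = I915_GEM_CREATE_EXT_MEMORY_REGIONS,
		.num_regions = 1,
		.regions = (uintptr_t)&lmem,
	};
	struct drm_i915_gem_create_ext create = {
		.size = *size,
		.extensions = (uintptr_t)&regions,
	};
	int ret = drmIoctl(drm_fd, DRM_IOCTL_I915_GEM_CREATE_EXT, &create);

	if (ret)
		return ret;

	*size = create.size;	/* already 64K-rounded for lmem placements */
	*handle = create.handle;
	return 0;
}

When such an object is later softpinned with EXEC_OBJECT_PINNED, its offset needs at least 64K alignment, and padding the reserved range with pad_to_2m() keeps I915_MEMORY_CLASS_SYSTEM objects out of the same 2M PDE, matching rule 3 of the summary in the uAPI documentation above.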