From patchwork Thu Oct 21 14:26:19 2021
X-Patchwork-Id: 12575317
From: Ramalingam C
To: dri-devel, intel-gfx
Cc: Daniel Vetter, CQ Tang, Matthew Auld, lucas.demarchi@intel.com,
 Hellstrom Thomas, Ramalingam C, Joonas Lahtinen
Subject: [PATCH v2 09/17] drm/i915/xehpsdv: implement memory coloring
Date: Thu, 21 Oct 2021 19:56:19 +0530
Message-Id: <20211021142627.31058-10-ramalingam.c@intel.com>
In-Reply-To: <20211021142627.31058-1-ramalingam.c@intel.com>
References: <20211021142627.31058-1-ramalingam.c@intel.com>

From: Matthew Auld

The basic idea is that each 2M block (i.e. each page-table) has a color,
depending on whether the page-table is occupied by LMEM objects (64K GTT
pages) or SMEM objects (4K GTT pages). The goal is to prevent mixing 64K
and 4K GTT pages within the same page-table, which is not supported by
the HW.
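To make the colouring rule above concrete, here is a small standalone
user-space sketch (not part of the patch, not driver code): it mimics how
a drm_mm color_adjust callback trims a candidate hole back to 2M
page-table boundaries when a neighbouring node has a different colour.
The struct, the example addresses and the local helper macros are invented
for illustration; only the round_up()/round_down()/SZ_2M semantics mirror
the kernel's.

/* Illustrative only: a user-space model of the 2M colouring rule. */
#include <stdint.h>
#include <stdio.h>

#define SZ_2M (2ull << 20)

/* Power-of-two rounding, mirroring the kernel's round_up()/round_down(). */
#define round_up(x, y)   ((((x) - 1) | ((y) - 1)) + 1)
#define round_down(x, y) ((x) & ~((y) - 1))

struct fake_node { unsigned long color; };	/* 1 = LMEM (64K PTEs), 0 = SMEM (4K PTEs) */

/* Shrink the usable hole so a differently coloured neighbour never shares a 2M PT. */
static void color_adjust(const struct fake_node *prev, const struct fake_node *next,
			 unsigned long color, uint64_t *start, uint64_t *end)
{
	if (prev && prev->color != color)
		*start = round_up(*start, SZ_2M);
	if (next && next->color != color)
		*end = round_down(*end, SZ_2M);
}

int main(void)
{
	struct fake_node prev = { .color = 0 }, next = { .color = 0 };
	uint64_t start = 0x310000, end = 0x9f0000;	/* hole bordered by SMEM nodes */

	/* Inserting an LMEM object (colour 1): both edges snap inwards to 2M. */
	color_adjust(&prev, &next, 1, &start, &end);
	printf("usable range: [0x%llx, 0x%llx)\n",
	       (unsigned long long)start, (unsigned long long)end);
	/* -> [0x400000, 0x800000): the partially filled page-tables are skipped */
	return 0;
}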
Signed-off-by: Matthew Auld
Signed-off-by: Stuart Summers
Signed-off-by: Ramalingam C
Cc: Joonas Lahtinen
Cc: Rodrigo Vivi
---
 drivers/gpu/drm/i915/gt/gen8_ppgtt.c  | 16 ++++++++++
 drivers/gpu/drm/i915/gt/intel_gtt.h   |  6 ++++
 drivers/gpu/drm/i915/i915_gem_evict.c | 17 ++++++++++
 drivers/gpu/drm/i915/i915_vma.c       | 46 +++++++++++++++++++--------
 4 files changed, 71 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
index fec0f20f1b93..666745adbe93 100644
--- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
+++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
@@ -464,6 +464,19 @@ gen8_ppgtt_insert_pte(struct i915_ppgtt *ppgtt,
 	return idx;
 }
 
+static void xehpsdv_ppgtt_color_adjust(const struct drm_mm_node *node,
+				       unsigned long color,
+				       u64 *start,
+				       u64 *end)
+{
+	if (i915_node_color_differs(node, color))
+		*start = round_up(*start, SZ_2M);
+
+	node = list_next_entry(node, node_list);
+	if (i915_node_color_differs(node, color))
+		*end = round_down(*end, SZ_2M);
+}
+
 static void
 xehpsdv_ppgtt_insert_huge(struct i915_vma *vma,
 			  struct sgt_dma *iter,
@@ -901,6 +914,9 @@ struct i915_ppgtt *gen8_ppgtt_create(struct intel_gt *gt,
 		ppgtt->vm.alloc_scratch_dma = alloc_pt_dma;
 	}
 
+	if (HAS_64K_PAGES(gt->i915))
+		ppgtt->vm.mm.color_adjust = xehpsdv_ppgtt_color_adjust;
+
 	err = gen8_init_scratch(&ppgtt->vm);
 	if (err)
 		goto err_free;
diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.h b/drivers/gpu/drm/i915/gt/intel_gtt.h
index 20101eef4c95..34696acde342 100644
--- a/drivers/gpu/drm/i915/gt/intel_gtt.h
+++ b/drivers/gpu/drm/i915/gt/intel_gtt.h
@@ -397,6 +397,12 @@ i915_vm_has_cache_coloring(struct i915_address_space *vm)
 	return i915_is_ggtt(vm) && vm->mm.color_adjust;
 }
 
+static inline bool
+i915_vm_has_memory_coloring(struct i915_address_space *vm)
+{
+	return !i915_is_ggtt(vm) && vm->mm.color_adjust;
+}
+
 static inline struct i915_ggtt *
 i915_vm_to_ggtt(struct i915_address_space *vm)
 {
diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c
index 2b73ddb11c66..006bf4924c24 100644
--- a/drivers/gpu/drm/i915/i915_gem_evict.c
+++ b/drivers/gpu/drm/i915/i915_gem_evict.c
@@ -292,6 +292,13 @@ int i915_gem_evict_for_node(struct i915_address_space *vm,
 
 		/* Always look at the page afterwards to avoid the end-of-GTT */
 		end += I915_GTT_PAGE_SIZE;
+	} else if (i915_vm_has_memory_coloring(vm)) {
+		/*
+		 * Expand the search to cover the page-table boundaries, in
+		 * case we need to flip the color of the page-table(s).
+		 */
+		start = round_down(start, SZ_2M);
+		end = round_up(end, SZ_2M);
 	}
 
 	GEM_BUG_ON(start >= end);
@@ -321,6 +328,16 @@ int i915_gem_evict_for_node(struct i915_address_space *vm,
 				if (node->color == target->color)
 					continue;
 			}
+		} else if (i915_vm_has_memory_coloring(vm)) {
+			if (node->start + node->size <= target->start) {
+				if (node->color == target->color)
+					continue;
+			}
+
+			if (node->start >= target->start + target->size) {
+				if (node->color == target->color)
+					continue;
+			}
 		}
 
 		if (i915_vma_is_pinned(vma)) {
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index c31b4bc8af16..92b124ecc38c 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -585,6 +585,10 @@ bool i915_gem_valid_gtt_space(struct i915_vma *vma, unsigned long color)
 	struct drm_mm_node *node = &vma->node;
 	struct drm_mm_node *other;
 
+	/* Only valid to be called on an already inserted vma */
+	GEM_BUG_ON(!drm_mm_node_allocated(node));
+	GEM_BUG_ON(list_empty(&node->node_list));
+
 	/*
 	 * On some machines we have to be careful when putting differing types
 	 * of snoopable memory together to avoid the prefetcher crossing memory
@@ -592,22 +596,34 @@ bool i915_gem_valid_gtt_space(struct i915_vma *vma, unsigned long color)
 	 * these constraints apply and set the drm_mm.color_adjust
 	 * appropriately.
 	 */
-	if (!i915_vm_has_cache_coloring(vma->vm))
-		return true;
-
-	/* Only valid to be called on an already inserted vma */
-	GEM_BUG_ON(!drm_mm_node_allocated(node));
-	GEM_BUG_ON(list_empty(&node->node_list));
+	if (i915_vm_has_cache_coloring(vma->vm)) {
+		other = list_prev_entry(node, node_list);
+		if (i915_node_color_differs(other, color) &&
+		    !drm_mm_hole_follows(other))
+			return false;
 
-	other = list_prev_entry(node, node_list);
-	if (i915_node_color_differs(other, color) &&
-	    !drm_mm_hole_follows(other))
-		return false;
+		other = list_next_entry(node, node_list);
+		if (i915_node_color_differs(other, color) &&
+		    !drm_mm_hole_follows(node))
+			return false;
+	/*
+	 * On XEHPSDV we need to make sure we are not mixing LMEM and SMEM objects
+	 * in the same page-table, i.e. mixing 64K and 4K GTT pages in the same
+	 * page-table.
+	 */
+	} else if (i915_vm_has_memory_coloring(vma->vm)) {
+		other = list_prev_entry(node, node_list);
+		if (i915_node_color_differs(other, color) &&
+		    !drm_mm_hole_follows(other) &&
+		    !IS_ALIGNED(other->start + other->size, SZ_2M))
+			return false;
 
-	other = list_next_entry(node, node_list);
-	if (i915_node_color_differs(other, color) &&
-	    !drm_mm_hole_follows(node))
-		return false;
+		other = list_next_entry(node, node_list);
+		if (i915_node_color_differs(other, color) &&
+		    !drm_mm_hole_follows(node) &&
+		    !IS_ALIGNED(other->start, SZ_2M))
+			return false;
+	}
 
 	return true;
 }
@@ -676,6 +692,8 @@ i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
 
 		if (i915_vm_has_cache_coloring(vma->vm))
 			color = vma->obj->cache_level;
+		else if (i915_vm_has_memory_coloring(vma->vm))
+			color = i915_gem_object_is_lmem(vma->obj);
 	}
 
 	if (flags & PIN_OFFSET_FIXED) {
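As a quick illustration of the rule the i915_gem_valid_gtt_space() change
enforces (again a standalone sketch, not driver code; the struct, field and
function names below are invented for the example): a neighbour of a
different colour is tolerated only if a hole separates the two nodes or
their shared edge lands exactly on a 2M boundary, so the two objects can
never populate the same page-table.

#include <stdbool.h>
#include <stdint.h>

#define SZ_2M (2ull << 20)
#define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)

struct fake_node {
	uint64_t start, size;
	unsigned long color;	/* 1 = LMEM/64K PTEs, 0 = SMEM/4K PTEs */
	bool hole_follows;	/* is there free space directly after this node? */
};

/* May a vma of the given colour sit immediately after 'prev'? */
static bool prev_neighbour_ok(const struct fake_node *prev, unsigned long color)
{
	if (prev->color == color)
		return true;				/* same colour: same page size */
	if (prev->hole_follows)
		return true;				/* gap in between: PT not shared */
	return IS_ALIGNED(prev->start + prev->size, SZ_2M);	/* clean 2M edge */
}

int main(void)
{
	/* An LMEM node ending mid page-table, with no hole after it ... */
	struct fake_node prev = { .start = 0x200000, .size = 0x90000,
				  .color = 1, .hole_follows = false };

	/* ... so an SMEM (colour 0) vma placed right behind it must be rejected. */
	return prev_neighbour_ok(&prev, 0) ? 1 : 0;
}

The symmetric check against the following node mirrors this, testing the
neighbour's start for 2M alignment instead of its end.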