From patchwork Mon Oct 23 20:21:45 2023
From: Andrzej Hajda
Date: Mon, 23 Oct 2023 22:21:45 +0200
Subject: [Intel-gfx] [PATCH v4 1/4] drm/i915: Reserve some kernel space per vm
Message-Id: <20231023-wabb-v4-1-f75dec962b7d@intel.com>
In-Reply-To: <20231023-wabb-v4-0-f75dec962b7d@intel.com>
To: intel-gfx@lists.freedesktop.org
Cc: Jonathan Cavitt, Andrzej Hajda, Chris Wilson

Reserve one page in each vm for kernel space to use for things such as
workarounds.

v2: use real memory, do not decrease vm.total
v4: reserve only one page and explain flag

Suggested-by: Chris Wilson
Signed-off-by: Andrzej Hajda
Reviewed-by: Jonathan Cavitt
Reviewed-by: Nirmoy Das
Reviewed-by: Andi Shyti
---
 drivers/gpu/drm/i915/gt/gen8_ppgtt.c | 38 ++++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/gt/intel_gtt.h  |  1 +
 2 files changed, 39 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
index 9895e18df0435a..1ac619a02a8567 100644
--- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
+++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
@@ -5,6 +5,7 @@
 #include
 
+#include "gem/i915_gem_internal.h"
 #include "gem/i915_gem_lmem.h"
 
 #include "gen8_ppgtt.h"
@@ -950,6 +951,39 @@ gen8_alloc_top_pd(struct i915_address_space *vm)
 	return ERR_PTR(err);
 }
 
+static int gen8_init_rsvd(struct i915_address_space *vm)
+{
+	struct drm_i915_private *i915 = vm->i915;
+	struct drm_i915_gem_object *obj;
+	struct i915_vma *vma;
+	int ret;
+
+	/* The memory will be used only by GPU. */
+	obj = i915_gem_object_create_lmem(i915, PAGE_SIZE,
+					  I915_BO_ALLOC_VOLATILE |
+					  I915_BO_ALLOC_GPU_ONLY);
+	if (IS_ERR(obj))
+		obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
+	if (IS_ERR(obj))
+		return PTR_ERR(obj);
+
+	vma = i915_vma_instance(obj, vm, NULL);
+	if (IS_ERR(vma)) {
+		ret = PTR_ERR(vma);
+		goto unref;
+	}
+
+	ret = i915_vma_pin(vma, 0, 0, PIN_USER | PIN_HIGH);
+	if (ret)
+		goto unref;
+
+	vm->rsvd = i915_vma_make_unshrinkable(vma);
+
+unref:
+	i915_gem_object_put(obj);
+	return ret;
+}
+
 /*
  * GEN8 legacy ppgtt programming is accomplished through a max 4 PDP registers
  * with a net effect resembling a 2-level page table in normal x86 terms. Each
@@ -1031,6 +1065,10 @@ struct i915_ppgtt *gen8_ppgtt_create(struct intel_gt *gt,
 	if (intel_vgpu_active(gt->i915))
 		gen8_ppgtt_notify_vgt(ppgtt, true);
 
+	err = gen8_init_rsvd(&ppgtt->vm);
+	if (err)
+		goto err_put;
+
 	return ppgtt;
 
 err_put:
diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.h b/drivers/gpu/drm/i915/gt/intel_gtt.h
index b471edac269920..5ac079e5f12f67 100644
--- a/drivers/gpu/drm/i915/gt/intel_gtt.h
+++ b/drivers/gpu/drm/i915/gt/intel_gtt.h
@@ -249,6 +249,7 @@ struct i915_address_space {
 	struct work_struct release_work;
 
 	struct drm_mm mm;
+	struct i915_vma *rsvd;
 	struct intel_gt *gt;
 	struct drm_i915_private *i915;
 	struct device *dma;
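
[Editor's note] The reserved page above is allocated with a preference for local memory and a fallback to the internal (system-memory) allocator, following the usual ERR_PTR error-pointer flow. The standalone sketch below mirrors only that control flow; the allocator names and the ERR_PTR/IS_ERR/PTR_ERR helpers are userspace stand-ins for illustration, not the driver's code.

/*
 * Standalone sketch (not driver code) of the allocation pattern in
 * gen8_init_rsvd(): try the preferred allocator first, fall back to a
 * second one, and only fail when both do.  ERR_PTR()/IS_ERR()/PTR_ERR()
 * are re-implemented here so the example builds in userspace; in the
 * kernel they come from <linux/err.h>.
 */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_ERRNO 4095

static void *ERR_PTR(long error) { return (void *)error; }
static long PTR_ERR(const void *ptr) { return (long)ptr; }
static int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* Hypothetical stand-ins for the lmem and internal object allocators. */
static void *create_lmem(size_t size)
{
	(void)size;
	return ERR_PTR(-ENODEV);	/* pretend there is no local memory */
}

static void *create_internal(size_t size)
{
	void *obj = malloc(size);

	return obj ? obj : ERR_PTR(-ENOMEM);
}

static int init_rsvd(void **out, size_t size)
{
	void *obj;

	obj = create_lmem(size);		/* preferred: GPU-only local memory */
	if (IS_ERR(obj))
		obj = create_internal(size);	/* fallback: system memory */
	if (IS_ERR(obj))
		return (int)PTR_ERR(obj);

	*out = obj;
	return 0;
}

int main(void)
{
	void *rsvd = NULL;
	int ret = init_rsvd(&rsvd, 4096);

	printf("init_rsvd: %d\n", ret);
	free(rsvd);
	return ret ? 1 : 0;
}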
From patchwork Mon Oct 23 20:21:46 2023
From: Andrzej Hajda
Date: Mon, 23 Oct 2023 22:21:46 +0200
Subject: [Intel-gfx] [PATCH v4 2/4] drm/i915: Add WABB blit for Wa_16018031267 / Wa_16018063123
Message-Id: <20231023-wabb-v4-2-f75dec962b7d@intel.com>
In-Reply-To: <20231023-wabb-v4-0-f75dec962b7d@intel.com>
To: intel-gfx@lists.freedesktop.org
Cc: Jonathan Cavitt, Andrzej Hajda, Nirmoy Das

Apply WABB blit for Wa_16018031267 / Wa_16018063123.

v3: drop unused enum definition
v4: move selftest to separate patch, use wa only on BCS0.

Co-developed-by: Nirmoy Das
Co-developed-by: Jonathan Cavitt
Signed-off-by: Andrzej Hajda
Signed-off-by: Nirmoy Das
Signed-off-by: Jonathan Cavitt
Reviewed-by: Andi Shyti
---
 drivers/gpu/drm/i915/gt/intel_engine_regs.h |   3 +
 drivers/gpu/drm/i915/gt/intel_gt.h          |   4 ++
 drivers/gpu/drm/i915/gt/intel_lrc.c         | 100 +++++++++++++++++++++++++++-
 3 files changed, 104 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_regs.h b/drivers/gpu/drm/i915/gt/intel_engine_regs.h
index fdd4ddd3a978a2..b8618ee3e3041a 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_regs.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_regs.h
@@ -118,6 +118,9 @@
 #define   CCID_EXTENDED_STATE_RESTORE		BIT(2)
 #define   CCID_EXTENDED_STATE_SAVE		BIT(3)
 #define RING_BB_PER_CTX_PTR(base)		_MMIO((base) + 0x1c0) /* gen8+ */
+#define   PER_CTX_BB_FORCE			BIT(2)
+#define   PER_CTX_BB_VALID			BIT(0)
+
 #define RING_INDIRECT_CTX(base)			_MMIO((base) + 0x1c4) /* gen8+ */
 #define RING_INDIRECT_CTX_OFFSET(base)		_MMIO((base) + 0x1c8) /* gen8+ */
 #define ECOSKPD(base)				_MMIO((base) + 0x1d0)
diff --git a/drivers/gpu/drm/i915/gt/intel_gt.h b/drivers/gpu/drm/i915/gt/intel_gt.h
index 970bedf6b78a7b..9ffdb05e231e21 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt.h
+++ b/drivers/gpu/drm/i915/gt/intel_gt.h
@@ -82,6 +82,10 @@ struct drm_printer;
 			  ##__VA_ARGS__);				\
 } while (0)
 
+#define NEEDS_FASTCOLOR_BLT_WABB(engine) ( \
+	IS_GFX_GT_IP_RANGE(engine->gt, IP_VER(12, 55), IP_VER(12, 71)) && \
+	engine->class == COPY_ENGINE_CLASS && engine->instance == 0)
+
 static inline bool gt_is_root(struct intel_gt *gt)
 {
 	return !gt->info.id;
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index eaf66d90316655..96ef901113eae9 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -828,6 +828,18 @@ lrc_ring_indirect_offset_default(const struct intel_engine_cs *engine)
 	return 0;
 }
 
+static void
+lrc_setup_bb_per_ctx(u32 *regs,
+		     const struct intel_engine_cs *engine,
+		     u32 ctx_bb_ggtt_addr)
+{
+	GEM_BUG_ON(lrc_ring_wa_bb_per_ctx(engine) == -1);
+	regs[lrc_ring_wa_bb_per_ctx(engine) + 1] =
+		ctx_bb_ggtt_addr |
+		PER_CTX_BB_FORCE |
+		PER_CTX_BB_VALID;
+}
+
 static void
 lrc_setup_indirect_ctx(u32 *regs,
 		       const struct intel_engine_cs *engine,
@@ -1020,7 +1032,13 @@ static u32 context_wa_bb_offset(const struct intel_context *ce)
 	return PAGE_SIZE * ce->wa_bb_page;
 }
 
-static u32 *context_indirect_bb(const struct intel_context *ce)
+/*
+ * per_ctx below determines which WABB section is used.
+ * When true, the function returns the location of the
+ * PER_CTX_BB. When false, the function returns the
+ * location of the INDIRECT_CTX.
+ */
+static u32 *context_wabb(const struct intel_context *ce, bool per_ctx)
 {
 	void *ptr;
 
@@ -1029,6 +1047,7 @@ static u32 *context_indirect_bb(const struct intel_context *ce)
 	ptr = ce->lrc_reg_state;
 	ptr -= LRC_STATE_OFFSET; /* back to start of context image */
 	ptr += context_wa_bb_offset(ce);
+	ptr += per_ctx ? PAGE_SIZE : 0;
 
 	return ptr;
 }
@@ -1105,7 +1124,8 @@ __lrc_alloc_state(struct intel_context *ce, struct intel_engine_cs *engine)
 
 	if (GRAPHICS_VER(engine->i915) >= 12) {
 		ce->wa_bb_page = context_size / PAGE_SIZE;
-		context_size += PAGE_SIZE;
+		/* INDIRECT_CTX and PER_CTX_BB need separate pages. */
+		context_size += PAGE_SIZE * 2;
 	}
 
 	if (intel_context_is_parent(ce) && intel_engine_uses_guc(engine)) {
@@ -1407,12 +1427,85 @@ gen12_emit_indirect_ctx_xcs(const struct intel_context *ce, u32 *cs)
 	return gen12_emit_aux_table_inv(ce->engine, cs);
 }
 
+static u32 *xehp_emit_fastcolor_blt_wabb(const struct intel_context *ce, u32 *cs)
+{
+	struct intel_gt *gt = ce->engine->gt;
+	int mocs = gt->mocs.uc_index << 1;
+
+	/**
+	 * Wa_16018031267 / Wa_16018063123 requires that SW forces the
+	 * main copy engine arbitration into round robin mode. We
+	 * additionally need to submit the following WABB blt command
+	 * to produce 4 subblits with each subblit generating 0 byte
+	 * write requests as WABB:
+	 *
+	 * XY_FASTCOLOR_BLT
+	 *  BG0    -> 5100000E
+	 *  BG1    -> 0000003F (Dest pitch)
+	 *  BG2    -> 00000000 (X1, Y1) = (0, 0)
+	 *  BG3    -> 00040001 (X2, Y2) = (1, 4)
+	 *  BG4    -> scratch
+	 *  BG5    -> scratch
+	 *  BG6-12 -> 00000000
+	 *  BG13   -> 20004004 (Surf. Width = 2, Surf. Height = 5)
+	 *  BG14   -> 00000010 (Qpitch = 4)
+	 *  BG15   -> 00000000
+	 */
+	*cs++ = XY_FAST_COLOR_BLT_CMD | (16 - 2);
+	*cs++ = FIELD_PREP(XY_FAST_COLOR_BLT_MOCS_MASK, mocs) | 0x3f;
+	*cs++ = 0;
+	*cs++ = 4 << 16 | 1;
+	*cs++ = lower_32_bits(i915_vma_offset(ce->vm->rsvd));
+	*cs++ = upper_32_bits(i915_vma_offset(ce->vm->rsvd));
+	*cs++ = 0;
+	*cs++ = 0;
+	*cs++ = 0;
+	*cs++ = 0;
+	*cs++ = 0;
+	*cs++ = 0;
+	*cs++ = 0;
+	*cs++ = 0x20004004;
+	*cs++ = 0x10;
+	*cs++ = 0;
+
+	return cs;
+}
+
+static u32 *
+xehp_emit_per_ctx_bb(const struct intel_context *ce, u32 *cs)
+{
+	/* Wa_16018031267, Wa_16018063123 */
+	if (NEEDS_FASTCOLOR_BLT_WABB(ce->engine))
+		cs = xehp_emit_fastcolor_blt_wabb(ce, cs);
+
+	return cs;
+}
+
+static void
+setup_per_ctx_bb(const struct intel_context *ce,
+		 const struct intel_engine_cs *engine,
+		 u32 *(*emit)(const struct intel_context *, u32 *))
+{
+	/* Place PER_CTX_BB on next page after INDIRECT_CTX */
+	u32 * const start = context_wabb(ce, true);
+	u32 *cs;
+
+	cs = emit(ce, start);
+
+	/* PER_CTX_BB must manually terminate */
+	*cs++ = MI_BATCH_BUFFER_END;
+
+	GEM_BUG_ON(cs - start > I915_GTT_PAGE_SIZE / sizeof(*cs));
+	lrc_setup_bb_per_ctx(ce->lrc_reg_state, engine,
+			     lrc_indirect_bb(ce) + PAGE_SIZE);
+}
+
 static void
 setup_indirect_ctx_bb(const struct intel_context *ce,
 		      const struct intel_engine_cs *engine,
 		      u32 *(*emit)(const struct intel_context *, u32 *))
 {
-	u32 * const start = context_indirect_bb(ce);
+	u32 * const start = context_wabb(ce, false);
 	u32 *cs;
 
 	cs = emit(ce, start);
@@ -1511,6 +1604,7 @@ u32 lrc_update_regs(const struct intel_context *ce,
 		/* Mutually exclusive wrt to global indirect bb */
 		GEM_BUG_ON(engine->wa_ctx.indirect_ctx.size);
 		setup_indirect_ctx_bb(ce, engine, fn);
+		setup_per_ctx_bb(ce, engine, xehp_emit_per_ctx_bb);
 	}
 
 	return lrc_descriptor(ce) | CTX_DESC_FORCE_RESTORE;
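
[Editor's note] Patch 2 grows the context image by one extra page so INDIRECT_CTX and PER_CTX_BB live on separate pages, and programs the per-context batch pointer with the batch address OR'ed with the FORCE and VALID bits. The sketch below is a minimal userspace illustration of that offset arithmetic and register composition, mirroring context_wabb() and lrc_setup_bb_per_ctx(); the page index and GGTT address are made-up values and the constants are simplified stand-ins.

/*
 * Minimal sketch of the context-image layout used by the patch, assuming
 * a 4 KiB page: the LRC register state is followed by one page for the
 * INDIRECT_CTX batch and one page for the PER_CTX_BB batch.  The value
 * written to the PER_CTX_BB pointer slot is the batch address OR'ed with
 * the FORCE and VALID bits.
 */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE		4096u
#define PER_CTX_BB_FORCE	(1u << 2)
#define PER_CTX_BB_VALID	(1u << 0)

/* Hypothetical stand-in for ce->wa_bb_page (page index inside the context). */
static uint32_t wa_bb_page = 22;

static uint32_t wabb_offset(int per_ctx)
{
	/* INDIRECT_CTX lives in the first wa_bb page, PER_CTX_BB in the next. */
	return PAGE_SIZE * wa_bb_page + (per_ctx ? PAGE_SIZE : 0);
}

int main(void)
{
	uint32_t ggtt_base = 0x00800000;	/* illustrative context address */
	uint32_t per_ctx_bb = ggtt_base + wabb_offset(1);

	printf("INDIRECT_CTX at 0x%08x\n", ggtt_base + wabb_offset(0));
	printf("PER_CTX_BB   at 0x%08x\n", per_ctx_bb);
	printf("PER_CTX_BB pointer slot value: 0x%08x\n",
	       per_ctx_bb | PER_CTX_BB_FORCE | PER_CTX_BB_VALID);
	return 0;
}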
From patchwork Mon Oct 23 20:21:47 2023
From: Andrzej Hajda
Date: Mon, 23 Oct 2023 22:21:47 +0200
Subject: [Intel-gfx] [PATCH v4 3/4] drm/i915/gt: add selftest to exercise WABB
Message-Id: <20231023-wabb-v4-3-f75dec962b7d@intel.com>
In-Reply-To: <20231023-wabb-v4-0-f75dec962b7d@intel.com>
To: intel-gfx@lists.freedesktop.org
Cc: Jonathan Cavitt, Andrzej Hajda, Nirmoy Das

The test re-uses logic from the indirect ctx BB selftest.

Co-developed-by: Nirmoy Das
Co-developed-by: Jonathan Cavitt
Signed-off-by: Andrzej Hajda
Signed-off-by: Nirmoy Das
Signed-off-by: Jonathan Cavitt
Reviewed-by: Andi Shyti
---
 drivers/gpu/drm/i915/gt/selftest_lrc.c | 65 ++++++++++++++++++++++++----------
 1 file changed, 47 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
index 5f826b6dcf5d6f..e17b8777d21dc9 100644
--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
@@ -1555,7 +1555,7 @@ static int live_lrc_isolation(void *arg)
 	return err;
 }
 
-static int indirect_ctx_submit_req(struct intel_context *ce)
+static int wabb_ctx_submit_req(struct intel_context *ce)
 {
 	struct i915_request *rq;
 	int err = 0;
@@ -1579,7 +1579,8 @@ static int indirect_ctx_submit_req(struct intel_context *ce)
 #define CTX_BB_CANARY_INDEX  (CTX_BB_CANARY_OFFSET / sizeof(u32))
 
 static u32 *
-emit_indirect_ctx_bb_canary(const struct intel_context *ce, u32 *cs)
+emit_wabb_ctx_canary(const struct intel_context *ce,
+		     u32 *cs, bool per_ctx)
 {
 	*cs++ = MI_STORE_REGISTER_MEM_GEN8 |
 		MI_SRM_LRM_GLOBAL_GTT |
@@ -1587,26 +1588,43 @@ emit_indirect_ctx_bb_canary(const struct intel_context *ce, u32 *cs)
 	*cs++ = i915_mmio_reg_offset(RING_START(0));
 	*cs++ = i915_ggtt_offset(ce->state) +
 		context_wa_bb_offset(ce) +
-		CTX_BB_CANARY_OFFSET;
+		CTX_BB_CANARY_OFFSET +
+		(per_ctx ? PAGE_SIZE : 0);
 	*cs++ = 0;
 
 	return cs;
 }
 
+static u32 *
+emit_indirect_ctx_bb_canary(const struct intel_context *ce, u32 *cs)
+{
+	return emit_wabb_ctx_canary(ce, cs, false);
+}
+
+static u32 *
+emit_per_ctx_bb_canary(const struct intel_context *ce, u32 *cs)
+{
+	return emit_wabb_ctx_canary(ce, cs, true);
+}
+
 static void
-indirect_ctx_bb_setup(struct intel_context *ce)
+wabb_ctx_setup(struct intel_context *ce, bool per_ctx)
 {
-	u32 *cs = context_indirect_bb(ce);
+	u32 *cs = context_wabb(ce, per_ctx);
 
 	cs[CTX_BB_CANARY_INDEX] = 0xdeadf00d;
 
-	setup_indirect_ctx_bb(ce, ce->engine, emit_indirect_ctx_bb_canary);
+	if (per_ctx)
+		setup_per_ctx_bb(ce, ce->engine, emit_per_ctx_bb_canary);
+	else
+		setup_indirect_ctx_bb(ce, ce->engine, emit_indirect_ctx_bb_canary);
 }
 
-static bool check_ring_start(struct intel_context *ce)
+static bool check_ring_start(struct intel_context *ce, bool per_ctx)
 {
 	const u32 * const ctx_bb = (void *)(ce->lrc_reg_state) -
-		LRC_STATE_OFFSET + context_wa_bb_offset(ce);
+		LRC_STATE_OFFSET + context_wa_bb_offset(ce) +
+		(per_ctx ? PAGE_SIZE : 0);
 
 	if (ctx_bb[CTX_BB_CANARY_INDEX] == ce->lrc_reg_state[CTX_RING_START])
 		return true;
@@ -1618,21 +1636,21 @@ static bool check_ring_start(struct intel_context *ce)
 	return false;
 }
 
-static int indirect_ctx_bb_check(struct intel_context *ce)
+static int wabb_ctx_check(struct intel_context *ce, bool per_ctx)
 {
 	int err;
 
-	err = indirect_ctx_submit_req(ce);
+	err = wabb_ctx_submit_req(ce);
 	if (err)
 		return err;
 
-	if (!check_ring_start(ce))
+	if (!check_ring_start(ce, per_ctx))
 		return -EINVAL;
 
 	return 0;
 }
 
-static int __live_lrc_indirect_ctx_bb(struct intel_engine_cs *engine)
+static int __lrc_wabb_ctx(struct intel_engine_cs *engine, bool per_ctx)
 {
 	struct intel_context *a, *b;
 	int err;
@@ -1667,14 +1685,14 @@ static int __live_lrc_indirect_ctx_bb(struct intel_engine_cs *engine)
 	 * As ring start is restored apriori of starting the indirect ctx bb and
 	 * as it will be different for each context, it fits to this purpose.
 	 */
-	indirect_ctx_bb_setup(a);
-	indirect_ctx_bb_setup(b);
+	wabb_ctx_setup(a, per_ctx);
+	wabb_ctx_setup(b, per_ctx);
 
-	err = indirect_ctx_bb_check(a);
+	err = wabb_ctx_check(a, per_ctx);
 	if (err)
 		goto unpin_b;
 
-	err = indirect_ctx_bb_check(b);
+	err = wabb_ctx_check(b, per_ctx);
 
 unpin_b:
 	intel_context_unpin(b);
@@ -1688,7 +1706,7 @@ static int __live_lrc_indirect_ctx_bb(struct intel_engine_cs *engine)
 	return err;
 }
 
-static int live_lrc_indirect_ctx_bb(void *arg)
+static int lrc_wabb_ctx(void *arg, bool per_ctx)
 {
 	struct intel_gt *gt = arg;
 	struct intel_engine_cs *engine;
@@ -1697,7 +1715,7 @@ static int live_lrc_indirect_ctx_bb(void *arg)
 	for_each_engine(engine, gt, id) {
 		intel_engine_pm_get(engine);
-		err = __live_lrc_indirect_ctx_bb(engine);
+		err = __lrc_wabb_ctx(engine, per_ctx);
 		intel_engine_pm_put(engine);
 
 		if (igt_flush_test(gt->i915))
@@ -1710,6 +1728,16 @@ static int live_lrc_indirect_ctx_bb(void *arg)
 	return err;
 }
 
+static int live_lrc_indirect_ctx_bb(void *arg)
+{
+	return lrc_wabb_ctx(arg, false);
+}
+
+static int live_lrc_per_ctx_bb(void *arg)
+{
+	return lrc_wabb_ctx(arg, true);
+}
+
 static void garbage_reset(struct intel_engine_cs *engine,
 			  struct i915_request *rq)
 {
@@ -1947,6 +1975,7 @@ int intel_lrc_live_selftests(struct drm_i915_private *i915)
 		SUBTEST(live_lrc_garbage),
 		SUBTEST(live_pphwsp_runtime),
 		SUBTEST(live_lrc_indirect_ctx_bb),
+		SUBTEST(live_lrc_per_ctx_bb),
 	};
 
 	if (!HAS_LOGICAL_RING_CONTEXTS(i915))
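
[Editor's note] The selftest plants a 0xdeadf00d canary in the WABB page and has the batch overwrite it with the live RING_START value via an SRM; if the canary survives, the batch never ran. Below is a compact userspace sketch of only that final check, with a simulated page and an illustrative RING_START value, not the selftest itself.

/*
 * Userspace sketch of the canary check, assuming a batch has already
 * stored the live RING_START value over the previously written canary.
 * If the slot still holds the canary, the WABB was never executed.
 */
#include <stdint.h>
#include <stdio.h>

#define CANARY	0xdeadf00du

static int canary_overwritten(const uint32_t *wabb_page, uint32_t ring_start)
{
	if (wabb_page[0] == ring_start)
		return 1;	/* batch ran and overwrote the canary */

	fprintf(stderr, "WABB did not run: found 0x%08x, expected 0x%08x\n",
		wabb_page[0], ring_start);
	return 0;
}

int main(void)
{
	uint32_t page[1024] = { CANARY };	/* simulated WABB page */
	uint32_t ring_start = 0x00123000;	/* illustrative value */

	/* Pretend the GPU executed the SRM and stored RING_START here. */
	page[0] = ring_start;

	return canary_overwritten(page, ring_start) ? 0 : 1;
}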
From patchwork Mon Oct 23 20:21:48 2023
From: Andrzej Hajda
Date: Mon, 23 Oct 2023 22:21:48 +0200
Subject: [Intel-gfx] [PATCH v4 4/4] drm/i915: Set copy engine arbitration for Wa_16018031267 / Wa_16018063123
Message-Id: <20231023-wabb-v4-4-f75dec962b7d@intel.com>
In-Reply-To: <20231023-wabb-v4-0-f75dec962b7d@intel.com>
To: intel-gfx@lists.freedesktop.org
Cc: Jonathan Cavitt, Andrzej Hajda, Nirmoy Das

From: Jonathan Cavitt

Set copy engine arbitration into round robin mode for part of
Wa_16018031267 / Wa_16018063123 mitigation.

Signed-off-by: Nirmoy Das
Signed-off-by: Jonathan Cavitt
Reviewed-by: Andrzej Hajda
Reviewed-by: Nirmoy Das
Signed-off-by: Andrzej Hajda
Reviewed-by: Andi Shyti
---
 drivers/gpu/drm/i915/gt/intel_engine_regs.h | 3 +++
 drivers/gpu/drm/i915/gt/intel_workarounds.c | 5 +++++
 2 files changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_regs.h b/drivers/gpu/drm/i915/gt/intel_engine_regs.h
index b8618ee3e3041a..c0c8c12edea104 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_regs.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_regs.h
@@ -124,6 +124,9 @@
 #define RING_INDIRECT_CTX(base)			_MMIO((base) + 0x1c4) /* gen8+ */
 #define RING_INDIRECT_CTX_OFFSET(base)		_MMIO((base) + 0x1c8) /* gen8+ */
 #define ECOSKPD(base)				_MMIO((base) + 0x1d0)
+#define   XEHP_BLITTER_SCHEDULING_MODE_MASK	REG_GENMASK(12, 11)
+#define   XEHP_BLITTER_ROUND_ROBIN_MODE		\
+		REG_FIELD_PREP(XEHP_BLITTER_SCHEDULING_MODE_MASK, 1)
 #define   ECO_CONSTANT_BUFFER_SR_DISABLE	REG_BIT(4)
 #define   ECO_GATING_CX_ONLY			REG_BIT(3)
 #define   GEN6_BLITTER_FBC_NOTIFY		REG_BIT(3)
diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c
index 192ac0e59afa13..108d9326735910 100644
--- a/drivers/gpu/drm/i915/gt/intel_workarounds.c
+++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c
@@ -2782,6 +2782,11 @@ xcs_engine_wa_init(struct intel_engine_cs *engine, struct i915_wa_list *wal)
 			 RING_SEMA_WAIT_POLL(engine->mmio_base),
 			 1);
 	}
+	/* Wa_16018031267, Wa_16018063123 */
+	if (NEEDS_FASTCOLOR_BLT_WABB(engine))
+		wa_masked_field_set(wal, ECOSKPD(engine->mmio_base),
+				    XEHP_BLITTER_SCHEDULING_MODE_MASK,
+				    XEHP_BLITTER_ROUND_ROBIN_MODE);
 }
 
 static void
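
[Editor's note] ECOSKPD is a masked register, so a field update like the one above is written as a 32-bit value whose upper half selects the bits being changed and whose lower half carries the new field value, leaving the remaining fields of the register untouched. The sketch below reproduces that composition in userspace; GENMASK and the FIELD_PREP-style constant are simplified stand-ins for the kernel macros, and the printed value is only illustrative.

/*
 * Sketch of a masked-register field write: mask in bits 31:16, value in
 * bits 15:0.  With mask = GENMASK(12, 11) and value = 1 << 11 (round
 * robin), the composed write is 0x18000800.
 */
#include <stdint.h>
#include <stdio.h>

#define GENMASK(h, l)	(((~0u) << (l)) & (~0u >> (31 - (h))))

#define XEHP_BLITTER_SCHEDULING_MODE_MASK	GENMASK(12, 11)
#define XEHP_BLITTER_ROUND_ROBIN_MODE		(1u << 11)	/* field value 1 in bits 12:11 */

/* Compose a write to a masked register: which bits to change, and to what. */
static uint32_t masked_field_set(uint32_t mask, uint32_t value)
{
	return (mask << 16) | value;
}

int main(void)
{
	uint32_t val = masked_field_set(XEHP_BLITTER_SCHEDULING_MODE_MASK,
					XEHP_BLITTER_ROUND_ROBIN_MODE);

	printf("ECOSKPD masked write: 0x%08x\n", val);
	return 0;
}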