From patchwork Thu Jul  1 20:24:04 2021
X-Patchwork-Submitter: Matt Roper
X-Patchwork-Id: 12354939
From: Matt Roper
To: intel-gfx@lists.freedesktop.org
Cc: John Harrison, dri-devel@lists.freedesktop.org
Subject: [PATCH 30/53] drm/i915/dg2: Maintain backward-compatible nested batch behavior
Date: Thu, 1 Jul 2021 13:24:04 -0700
Message-Id: <20210701202427.1547543-31-matthew.d.roper@intel.com>
In-Reply-To: <20210701202427.1547543-1-matthew.d.roper@intel.com>
References: <20210701202427.1547543-1-matthew.d.roper@intel.com>
List-Id: Direct Rendering Infrastructure - Development

For tgl+, the per-context setting of MI_MODE[12] determines whether the
bits of a nested MI_BATCH_BUFFER_START instruction should be interpreted
in the traditional manner or whether they should instead use a new tgl+
meaning that breaks backward compatibility, but allows nesting into
3rd-level batchbuffers.  For previous platforms, the hardware default
for this register bit is to maintain backward-compatible behavior unless
a context intentionally opts into the new behavior; however, Xe_HPG
flips the hardware default behavior.
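
To make the compatibility contract concrete, here is a minimal,
hypothetical sketch (not part of this patch) of the legacy two-level
nesting that existing userspace relies on.  The MI_INSTR() encoding
below matches i915's convention; the second-level flag position is an
assumption for illustration only and should be confirmed against the
bspec:

  #include <stdint.h>

  #define MI_INSTR(opcode, flags)  (((opcode) << 23) | (flags))
  #define MI_BATCH_BUFFER_START    MI_INSTR(0x31, 1) /* gen8+: one extra length dword */
  #define MI_BATCH_SECOND_LEVEL    (1u << 22)        /* assumed flag position */

  /*
   * Emit a jump from a first-level batch into a second-level batch.
   * With the legacy MI_MODE[12]=0 interpretation, hardware returns to
   * the parent batch when the child completes; with the new tgl+
   * interpretation, the same instruction bits would instead select
   * nesting into a third batch level.
   */
  static uint32_t *emit_second_level_bb_start(uint32_t *cs, uint64_t child_addr)
  {
          *cs++ = MI_BATCH_BUFFER_START | MI_BATCH_SECOND_LEVEL;
          *cs++ = (uint32_t)child_addr;          /* address, low dword */
          *cs++ = (uint32_t)(child_addr >> 32);  /* address, high dword */
          return cs;
  }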
From a SW perspective, we want to maintain the backward-compatible
behavior for userspace, so we'll apply a fake workaround to set it back
to the legacy behavior on platforms where the hardware default is to
break compatibility.  At the moment there is no Linux userspace that
utilizes third-level batchbuffers, so applying this workaround avoids
the need for any userspace changes; using the legacy meaning is the
correct thing to do.  If/when we have userspace consumers that want to
utilize third-level batch nesting, we can provide a context parameter
to allow them to opt in.

Bspec: 45974, 45718
Cc: John Harrison
Signed-off-by: Matt Roper
---
 drivers/gpu/drm/i915/gt/intel_workarounds.c | 39 +++++++++++++++++++--
 drivers/gpu/drm/i915/i915_reg.h             |  1 +
 2 files changed, 38 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c
index f97ff2848122..43db766b0672 100644
--- a/drivers/gpu/drm/i915/gt/intel_workarounds.c
+++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c
@@ -686,6 +686,37 @@ static void dg1_ctx_workarounds_init(struct intel_engine_cs *engine,
 		    DG1_HZ_READ_SUPPRESSION_OPTIMIZATION_DISABLE);
 }
 
+static void fakewa_disable_nestedbb_mode(struct intel_engine_cs *engine,
+					 struct i915_wa_list *wal)
+{
+	/*
+	 * This is a "fake" workaround defined by software to ensure we
+	 * maintain reliable, backward-compatible behavior for userspace with
+	 * regards to how nested MI_BATCH_BUFFER_START commands are handled.
+	 *
+	 * The per-context setting of MI_MODE[12] determines whether the bits
+	 * of a nested MI_BATCH_BUFFER_START instruction should be interpreted
+	 * in the traditional manner or whether they should instead use a new
+	 * tgl+ meaning that breaks backward compatibility, but allows nesting
+	 * into 3rd-level batchbuffers.  When this new capability was first
+	 * added in TGL, it remained off by default unless a context
+	 * intentionally opted in to the new behavior.  However, Xe_HPG now
+	 * flips this on by default and requires that we explicitly opt out if
+	 * we don't want the new behavior.
+	 *
+	 * From a SW perspective, we want to maintain the backward-compatible
+	 * behavior for userspace, so we'll apply a fake workaround to set it
+	 * back to the legacy behavior on platforms where the hardware default
+	 * is to break compatibility.  At the moment there is no Linux
+	 * userspace that utilizes third-level batchbuffers, so applying this
+	 * workaround avoids the need for any userspace changes; using the
+	 * legacy meaning is the correct thing to do.  If/when we have
+	 * userspace consumers that want to utilize third-level batch nesting,
+	 * we can provide a context parameter to allow them to opt in.
+	 */
+	wa_masked_dis(wal, RING_MI_MODE(engine->mmio_base), TGL_NESTED_BB_EN);
+}
+
 static void
 __intel_engine_init_ctx_wa(struct intel_engine_cs *engine,
 			   struct i915_wa_list *wal,
@@ -693,11 +724,15 @@ __intel_engine_init_ctx_wa(struct intel_engine_cs *engine,
 {
 	struct drm_i915_private *i915 = engine->i915;
 
+	wa_init_start(wal, name, engine->name);
+
+	/* Applies to all engines */
+	if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 55))
+		fakewa_disable_nestedbb_mode(engine, wal);
+
 	if (engine->class != RENDER_CLASS)
 		return;
 
-	wa_init_start(wal, name, engine->name);
-
 	if (IS_DG1(i915))
 		dg1_ctx_workarounds_init(engine, wal);
 	else if (GRAPHICS_VER(i915) == 12)
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index b19d102e0a01..35a42df1f2aa 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -2821,6 +2821,7 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 #define MI_MODE		_MMIO(0x209c)
 # define   VS_TIMER_DISPATCH		(1 << 6)
 # define   MI_FLUSH_ENABLE		(1 << 12)
+# define   TGL_NESTED_BB_EN		(1 << 12)
 # define   ASYNC_FLIP_PERF_DISABLE	(1 << 14)
 # define   MODE_IDLE			(1 << 9)
 # define   STOP_RING			(1 << 8)
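
A note for reviewers less familiar with the workaround machinery:
MI_MODE is a masked register, so wa_masked_dis() only needs to record a
single dword whose upper half selects which bit the write touches.  A
standalone sketch of that encoding, mirroring i915's _MASKED_FIELD() /
_MASKED_BIT_DISABLE() helpers (the function names below are
illustrative, not taken from the patch):

  #include <stdint.h>

  #define TGL_NESTED_BB_EN  (1u << 12)  /* MI_MODE[12], as defined above */

  /*
   * Masked-register write: bits 31:16 are a write-enable mask for bits
   * 15:0, so a single write updates just the selected bits with no
   * read-modify-write.  Equivalent to i915's _MASKED_FIELD().
   */
  static inline uint32_t masked_field(uint32_t mask, uint32_t value)
  {
          return (mask << 16) | value;
  }

  /*
   * The dword this fake workaround ultimately writes to MI_MODE:
   * mask bit 12 set, value bit 12 clear, i.e. 0x10000000 -- hardware
   * clears MI_MODE[12] and leaves every other bit untouched.
   */
  static inline uint32_t tgl_nested_bb_disable(void)
  {
          return masked_field(TGL_NESTED_BB_EN, 0);
  }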