From patchwork Wed Sep 6 11:31:17 2023
X-Patchwork-Submitter: Nirmoy Das
X-Patchwork-Id: 13375594
From: Nirmoy Das
To: intel-gfx@lists.freedesktop.org
Cc: Chris Wilson, andi.shyti@intel.com, chris.p.wilson@linux.intel.com,
    matthew.d.roper@intel.com, Nirmoy Das
Date: Wed, 6 Sep 2023 13:31:17 +0200
Message-ID: <20230906113121.30472-2-nirmoy.das@intel.com>
In-Reply-To: <20230906113121.30472-1-nirmoy.das@intel.com>
Subject: [Intel-gfx] [PATCH 1/5] drm/i915: Lift runtime-pm acquire callbacks out of intel_wakeref.mutex

From: Chris Wilson

When runtime pm is first woken, it will synchronously call the registered
callbacks for the device. These callbacks may pull in their own forest of
locks, which we do not want to conflate with the intel_wakeref.mutex. A
second, minor benefit of reducing the coverage of the mutex is that it
will reduce contention for frequent sleeps and wakes (such as when being
used for soft-rc6).
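
The reordering can be summarised with a minimal sketch (illustrative only,
not the exact code in the diff below): acquire the runtime-pm reference
before taking wf->mutex, transfer its ownership under the lock, and release
any leftover reference only after unlocking, so the runtime-pm callbacks
never run inside the mutex.

        /* Illustrative sketch of the resulting ordering. */
        static int get_first_sketch(struct intel_wakeref *wf)
        {
                /* may invoke runtime-pm callbacks: do it outside wf->mutex */
                intel_wakeref_t wakeref = intel_runtime_pm_get(&wf->i915->runtime_pm);
                int err = 0;

                mutex_lock(&wf->mutex);
                if (!atomic_read(&wf->count)) {
                        wf->wakeref = fetch_and_zero(&wakeref); /* ownership moves to wf */
                        err = wf->ops->get(wf);
                        if (err)
                                wakeref = xchg(&wf->wakeref, 0); /* take it back on failure */
                }
                if (!err)
                        atomic_inc(&wf->count);
                mutex_unlock(&wf->mutex);

                /* drop an unused/returned reference only after unlocking */
                if (wakeref)
                        intel_runtime_pm_put(&wf->i915->runtime_pm, wakeref);

                return err;
        }
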
Signed-off-by: Chris Wilson
Signed-off-by: Nirmoy Das
Reviewed-by: Andi Shyti
---
 drivers/gpu/drm/i915/intel_wakeref.c | 43 ++++++++++++++--------------
 1 file changed, 21 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_wakeref.c b/drivers/gpu/drm/i915/intel_wakeref.c
index 718f2f1b6174..af7b4cb5b4d7 100644
--- a/drivers/gpu/drm/i915/intel_wakeref.c
+++ b/drivers/gpu/drm/i915/intel_wakeref.c
@@ -10,21 +10,11 @@
 #include "intel_wakeref.h"
 #include "i915_drv.h"
 
-static void rpm_get(struct intel_wakeref *wf)
-{
-        wf->wakeref = intel_runtime_pm_get(&wf->i915->runtime_pm);
-}
-
-static void rpm_put(struct intel_wakeref *wf)
-{
-        intel_wakeref_t wakeref = fetch_and_zero(&wf->wakeref);
-
-        intel_runtime_pm_put(&wf->i915->runtime_pm, wakeref);
-        INTEL_WAKEREF_BUG_ON(!wakeref);
-}
-
 int __intel_wakeref_get_first(struct intel_wakeref *wf)
 {
+        intel_wakeref_t wakeref = intel_runtime_pm_get(&wf->i915->runtime_pm);
+        int err = 0;
+
         /*
          * Treat get/put as different subclasses, as we may need to run
          * the put callback from under the shrinker and do not want to
@@ -32,41 +22,50 @@ int __intel_wakeref_get_first(struct intel_wakeref *wf)
          * upon acquiring the wakeref.
          */
         mutex_lock_nested(&wf->mutex, SINGLE_DEPTH_NESTING);
-        if (!atomic_read(&wf->count)) {
-                int err;
 
-                rpm_get(wf);
+        if (likely(!atomic_read(&wf->count))) {
+                INTEL_WAKEREF_BUG_ON(wf->wakeref);
+                wf->wakeref = fetch_and_zero(&wakeref);
 
                 err = wf->ops->get(wf);
                 if (unlikely(err)) {
-                        rpm_put(wf);
-                        mutex_unlock(&wf->mutex);
-                        return err;
+                        wakeref = xchg(&wf->wakeref, 0);
+                        wake_up_var(&wf->wakeref);
+                        goto unlock;
                 }
 
                 smp_mb__before_atomic(); /* release wf->count */
         }
         atomic_inc(&wf->count);
+        INTEL_WAKEREF_BUG_ON(atomic_read(&wf->count) <= 0);
+
+unlock:
         mutex_unlock(&wf->mutex);
+        if (unlikely(wakeref))
+                intel_runtime_pm_put(&wf->i915->runtime_pm, wakeref);
 
-        INTEL_WAKEREF_BUG_ON(atomic_read(&wf->count) <= 0);
-        return 0;
+        return err;
 }
 
 static void ____intel_wakeref_put_last(struct intel_wakeref *wf)
 {
+        intel_wakeref_t wakeref = 0;
+
         INTEL_WAKEREF_BUG_ON(atomic_read(&wf->count) <= 0);
         if (unlikely(!atomic_dec_and_test(&wf->count)))
                 goto unlock;
 
         /* ops->put() must reschedule its own release on error/deferral */
         if (likely(!wf->ops->put(wf))) {
-                rpm_put(wf);
+                INTEL_WAKEREF_BUG_ON(!wf->wakeref);
+                wakeref = xchg(&wf->wakeref, 0);
                 wake_up_var(&wf->wakeref);
         }
 
 unlock:
         mutex_unlock(&wf->mutex);
+        if (wakeref)
+                intel_runtime_pm_put(&wf->i915->runtime_pm, wakeref);
 }
 
 void __intel_wakeref_put_last(struct intel_wakeref *wf, unsigned long flags)

From patchwork Wed Sep 6 11:31:18 2023
X-Patchwork-Submitter: Nirmoy Das
X-Patchwork-Id: 13375595
From: Nirmoy Das
To: intel-gfx@lists.freedesktop.org
Cc: andi.shyti@intel.com, chris.p.wilson@linux.intel.com,
    matthew.d.roper@intel.com, Nirmoy Das
Date: Wed, 6 Sep 2023 13:31:18 +0200
Message-ID: <20230906113121.30472-3-nirmoy.das@intel.com>
In-Reply-To: <20230906113121.30472-1-nirmoy.das@intel.com>
Subject: [Intel-gfx] [PATCH 2/5] drm/i915: Create a kernel context for GGTT updates

Create a separate kernel context if a platform requires GGTT updates
using the MI_UPDATE_GTT blitter command. A subsequent patch will
introduce methods to update the GGTT using this bind context and
MI_UPDATE_GTT.
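
For orientation, the batches this bind context ends up executing have the
shape sketched below. This is a simplified view of the emission done later
in this series (request creation, chunking and scratch filling omitted);
the helper name and parameters are illustrative, not from the tree.

        /* Sketch only: emit one MI_UPDATE_GTT batch updating n_ptes GGTT entries. */
        static u32 *emit_mi_update_gtt(u32 *cs, u32 first_entry, const u64 *ptes, u32 n_ptes)
        {
                *cs++ = MI_UPDATE_GTT | (2 * n_ptes);   /* each 64-bit PTE takes two dwords */
                *cs++ = first_entry << 12;              /* as in the later patch: entry index << 12 */

                for (u32 i = 0; i < n_ptes; i++) {
                        *cs++ = lower_32_bits(ptes[i]);
                        *cs++ = upper_32_bits(ptes[i]);
                }

                return cs;
        }
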
Signed-off-by: Nirmoy Das
---
 drivers/gpu/drm/i915/gt/intel_engine.h       |  2 ++
 drivers/gpu/drm/i915/gt/intel_engine_cs.c    | 33 +++++++++++++++++++-
 drivers/gpu/drm/i915/gt/intel_engine_types.h |  3 ++
 drivers/gpu/drm/i915/gt/intel_gt.c           | 18 +++++++++++
 drivers/gpu/drm/i915/gt/intel_gt.h           |  2 ++
 drivers/gpu/drm/i915/gt/intel_gtt.c          |  4 +++
 drivers/gpu/drm/i915/gt/intel_gtt.h          |  2 ++
 7 files changed, 63 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h b/drivers/gpu/drm/i915/gt/intel_engine.h
index b58c30ac8ef0..40269e4c1e31 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine.h
@@ -170,6 +170,8 @@ intel_write_status_page(struct intel_engine_cs *engine, int reg, u32 value)
 #define I915_GEM_HWS_SEQNO              0x40
 #define I915_GEM_HWS_SEQNO_ADDR         (I915_GEM_HWS_SEQNO * sizeof(u32))
 #define I915_GEM_HWS_MIGRATE            (0x42 * sizeof(u32))
+#define I915_GEM_HWS_GGTT_BIND          0x46
+#define I915_GEM_HWS_GGTT_BIND_ADDR     (I915_GEM_HWS_GGTT_BIND * sizeof(u32))
 #define I915_GEM_HWS_PXP                0x60
 #define I915_GEM_HWS_PXP_ADDR           (I915_GEM_HWS_PXP * sizeof(u32))
 #define I915_GEM_HWS_GSC                0x62
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index dfb69fc977a0..12af594e9164 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -1419,6 +1419,20 @@ void intel_engine_destroy_pinned_context(struct intel_context *ce)
         intel_context_put(ce);
 }
 
+static struct intel_context *
+create_ggtt_bind_context(struct intel_engine_cs *engine)
+{
+        static struct lock_class_key kernel;
+
+        /*
+         * MI_UPDATE_GTT can insert up to 512 PTE entries and there could be
+         * multiple bind requests at a time so get a bigger ring.
+         */
+        return intel_engine_create_pinned_context(engine, engine->gt->vm, SZ_512K,
+                                                  I915_GEM_HWS_GGTT_BIND_ADDR,
+                                                  &kernel, "ggtt_bind_context");
+}
+
 static struct intel_context *
 create_kernel_context(struct intel_engine_cs *engine)
 {
@@ -1442,7 +1456,7 @@ create_kernel_context(struct intel_engine_cs *engine)
  */
 static int engine_init_common(struct intel_engine_cs *engine)
 {
-        struct intel_context *ce;
+        struct intel_context *ce, *bce = NULL;
         int ret;
 
         engine->set_default_submission(engine);
@@ -1458,6 +1472,17 @@ static int engine_init_common(struct intel_engine_cs *engine)
         ce = create_kernel_context(engine);
         if (IS_ERR(ce))
                 return PTR_ERR(ce);
+        /*
+         * Create a separate pinned context for GGTT updates with the blitter
+         * engine if a platform requires such a service. MI_UPDATE_GTT works
+         * on other engines as well, but BCS should be a less busy engine, so
+         * pick that for GGTT updates.
+         */
+        if (i915_ggtt_require_binder(engine->i915) && engine->id == BCS0) {
+                bce = create_ggtt_bind_context(engine);
+                if (IS_ERR(bce))
+                        return PTR_ERR(bce);
+        }
 
         ret = measure_breadcrumb_dw(ce);
         if (ret < 0)
@@ -1465,11 +1490,14 @@ static int engine_init_common(struct intel_engine_cs *engine)
 
         engine->emit_fini_breadcrumb_dw = ret;
         engine->kernel_context = ce;
+        engine->bind_context = bce;
 
         return 0;
 
 err_context:
         intel_engine_destroy_pinned_context(ce);
+        if (bce)
+                intel_engine_destroy_pinned_context(bce);
         return ret;
 }
 
@@ -1537,6 +1565,9 @@ void intel_engine_cleanup_common(struct intel_engine_cs *engine)
         if (engine->kernel_context)
                 intel_engine_destroy_pinned_context(engine->kernel_context);
 
+        if (engine->bind_context)
+                intel_engine_destroy_pinned_context(engine->bind_context);
+
         GEM_BUG_ON(!llist_empty(&engine->barrier_tasks));
         cleanup_status_page(engine);
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h
index a7e677598004..a8f527fab0f0 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
@@ -416,6 +416,9 @@ struct intel_engine_cs {
         struct llist_head barrier_tasks;
 
         struct intel_context *kernel_context; /* pinned */
+        struct intel_context *bind_context; /* pinned, only for BCS0 */
+        /* mark the bind context's availability status */
+        bool bind_context_ready;
 
         /**
          * pinned_contexts_list: List of pinned contexts. This list is only
diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c b/drivers/gpu/drm/i915/gt/intel_gt.c
index 449f0b7fc843..cd0ff1597db9 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt.c
@@ -1019,3 +1019,21 @@ enum i915_map_type intel_gt_coherent_map_type(struct intel_gt *gt,
         else
                 return I915_MAP_WC;
 }
+
+void intel_gt_bind_context_set_ready(struct intel_gt *gt, bool ready)
+{
+        struct intel_engine_cs *engine = gt->engine[BCS0];
+
+        if (engine && engine->bind_context)
+                engine->bind_context_ready = ready;
+}
+
+bool intel_gt_is_bind_context_ready(struct intel_gt *gt)
+{
+        struct intel_engine_cs *engine = gt->engine[BCS0];
+
+        if (engine)
+                return engine->bind_context_ready;
+
+        return false;
+}
diff --git a/drivers/gpu/drm/i915/gt/intel_gt.h b/drivers/gpu/drm/i915/gt/intel_gt.h
index 239848bcb2a4..9e70e625cebc 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt.h
+++ b/drivers/gpu/drm/i915/gt/intel_gt.h
@@ -180,4 +180,6 @@ enum i915_map_type intel_gt_coherent_map_type(struct intel_gt *gt,
                                               struct drm_i915_gem_object *obj,
                                               bool always_coherent);
 
+void intel_gt_bind_context_set_ready(struct intel_gt *gt, bool ready);
+bool intel_gt_is_bind_context_ready(struct intel_gt *gt);
 #endif /* __INTEL_GT_H__ */
diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.c b/drivers/gpu/drm/i915/gt/intel_gtt.c
index 13944a14ea2d..4c89eb8d9af7 100644
--- a/drivers/gpu/drm/i915/gt/intel_gtt.c
+++ b/drivers/gpu/drm/i915/gt/intel_gtt.c
@@ -21,6 +21,10 @@
 #include "intel_gt_regs.h"
 #include "intel_gtt.h"
 
+bool i915_ggtt_require_binder(struct drm_i915_private *i915)
+{
+        return false;
+}
 
 static bool intel_ggtt_update_needs_vtd_wa(struct drm_i915_private *i915)
 {
diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.h b/drivers/gpu/drm/i915/gt/intel_gtt.h
index 4d6296cdbcfd..bb7e8d48a060 100644
--- a/drivers/gpu/drm/i915/gt/intel_gtt.h
+++ b/drivers/gpu/drm/i915/gt/intel_gtt.h
@@ -688,4 +688,6 @@ static inline struct sgt_dma {
         return (struct sgt_dma){ sg, addr, addr + sg_dma_len(sg) };
 }
 
+bool i915_ggtt_require_binder(struct drm_i915_private *i915);
+
 #endif
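
A rough justification for the SZ_512K ring chosen in create_ggtt_bind_context(),
assuming two dwords per 64-bit PTE and about 511 PTEs per MI_UPDATE_GTT as used
later in this series (the macro names below are illustrative, not from the tree):

        #define SKETCH_PTE_DW           2       /* 64-bit PTE = 2 dwords */
        #define SKETCH_HDR_DW           2       /* MI_UPDATE_GTT header + offset dword */
        #define SKETCH_PTES_PER_BATCH   511
        #define SKETCH_BATCH_DW         (SKETCH_HDR_DW + SKETCH_PTES_PER_BATCH * SKETCH_PTE_DW)
        /*
         * ~1024 dwords, i.e. ~4 KiB of ring space per batch, so a 512 KiB ring
         * leaves room for on the order of a hundred in-flight bind batches plus
         * per-request overhead such as breadcrumbs.
         */
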

From patchwork Wed Sep 6 11:31:19 2023
X-Patchwork-Submitter: Nirmoy Das
X-Patchwork-Id: 13375596
From: Nirmoy Das
To: intel-gfx@lists.freedesktop.org
Cc: andi.shyti@intel.com, chris.p.wilson@linux.intel.com,
    matthew.d.roper@intel.com, Nirmoy Das
Date: Wed, 6 Sep 2023 13:31:19 +0200
Message-ID: <20230906113121.30472-4-nirmoy.das@intel.com>
In-Reply-To: <20230906113121.30472-1-nirmoy.das@intel.com>
Subject: [Intel-gfx] [PATCH 3/5] drm/i915: Implement __for_each_sgt_daddr_next

Implement a way to iterate over an sg_table with a pre-initialized
sgt_iter state, so that a walk can be resumed where a previous one
left off.

Signed-off-by: Nirmoy Das
---
 drivers/gpu/drm/i915/i915_scatterlist.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_scatterlist.h b/drivers/gpu/drm/i915/i915_scatterlist.h
index 5a10c1a31183..6cf8a298849f 100644
--- a/drivers/gpu/drm/i915/i915_scatterlist.h
+++ b/drivers/gpu/drm/i915/i915_scatterlist.h
@@ -91,6 +91,16 @@ static inline struct scatterlist *__sg_next(struct scatterlist *sg)
              ((__dp) = (__iter).dma + (__iter).curr), (__iter).sgp;    \
              (((__iter).curr += (__step)) >= (__iter).max) ?           \
              (__iter) = __sgt_iter(__sg_next((__iter).sgp), true), 0 : 0)
+/**
+ * __for_each_daddr_next - iterates over the device addresses with pre-initialized iterator.
+ * @__dp:      Device address (output)
+ * @__iter:    'struct sgt_iter' (iterator state, external)
+ * @__step:    step size
+ */
+#define __for_each_daddr_next(__dp, __iter, __step)                    \
+       for (; ((__dp) = (__iter).dma + (__iter).curr), (__iter).sgp;   \
+            (((__iter).curr += (__step)) >= (__iter).max) ?            \
+            (__iter) = __sgt_iter(__sg_next((__iter).sgp), true), 0 : 0)
 
 /**
  * for_each_sgt_page - iterate over the pages of the given sg_table
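
A usage sketch of the new macro (pages, emit() and the chunk size of 511 are
placeholders): the iterator state lives outside the loop, so a second loop
resumes at the first address the previous one did not consume instead of
restarting from pages->sgl.

        struct sgt_iter iter = __sgt_iter(pages->sgl, true);   /* walk dma addresses */
        dma_addr_t daddr;
        int count = 0;

        __for_each_daddr_next(daddr, iter, I915_GTT_PAGE_SIZE) {
                if (count == 511)       /* first chunk full; daddr not consumed yet */
                        break;
                emit(daddr);            /* emit() is a placeholder */
                count++;
        }

        /* A later loop resumes at the first address left out above. */
        __for_each_daddr_next(daddr, iter, I915_GTT_PAGE_SIZE)
                emit(daddr);
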

From patchwork Wed Sep 6 11:31:20 2023
X-Patchwork-Submitter: Nirmoy Das
X-Patchwork-Id: 13375597
From: Nirmoy Das
To: intel-gfx@lists.freedesktop.org
Cc: andi.shyti@intel.com, chris.p.wilson@linux.intel.com,
    matthew.d.roper@intel.com, Nirmoy Das
Date: Wed, 6 Sep 2023 13:31:20 +0200
Message-ID: <20230906113121.30472-5-nirmoy.das@intel.com>
In-Reply-To: <20230906113121.30472-1-nirmoy.das@intel.com>
Subject: [Intel-gfx] [PATCH 4/5] drm/i915: Implement GGTT update method with MI_UPDATE_GTT

Implement a GGTT update method using the blitter command MI_UPDATE_GTT,
and install those handlers if a platform requires it.

v2: Make sure we hold the GT wakeref and the blitter engine wakeref before
    calling mutex_lock/intel_context_enter below. When the GT/engine is not
    awake, intel_context_enter calls into runtime-pm code which can end up
    in kmalloc/fs_reclaim. Triggering fs_reclaim while holding a mutex is
    not allowed, because the shrinker can also try to take the same mutex,
    which would create a circular lock dependency. So take the GT/blitter
    engine wakeref before calling mutex_lock to break the cycle (see the
    ordering sketch after the diff below).

Signed-off-by: Nirmoy Das
Signed-off-by: Oak Zeng
---
 drivers/gpu/drm/i915/gt/intel_ggtt.c | 235 +++++++++++++++++++++++++++
 drivers/gpu/drm/i915/gt/intel_gtt.h  |   3 +
 2 files changed, 238 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/intel_ggtt.c b/drivers/gpu/drm/i915/gt/intel_ggtt.c
index dd0ed941441a..b94de7cebfce 100644
--- a/drivers/gpu/drm/i915/gt/intel_ggtt.c
+++ b/drivers/gpu/drm/i915/gt/intel_ggtt.c
@@ -15,18 +15,23 @@
 #include "display/intel_display.h"
 #include "gem/i915_gem_lmem.h"
 
+#include "intel_context.h"
 #include "intel_ggtt_gmch.h"
+#include "intel_gpu_commands.h"
 #include "intel_gt.h"
 #include "intel_gt_regs.h"
 #include "intel_pci_config.h"
+#include "intel_ring.h"
 #include "i915_drv.h"
 #include "i915_pci.h"
+#include "i915_request.h"
 #include "i915_scatterlist.h"
 #include "i915_utils.h"
 #include "i915_vgpu.h"
 
 #include "intel_gtt.h"
 #include "gen8_ppgtt.h"
+#include "intel_engine_pm.h"
 
 static void i915_ggtt_color_adjust(const struct drm_mm_node *node,
                                    unsigned long color,
@@ -252,6 +257,145 @@ u64 gen8_ggtt_pte_encode(dma_addr_t addr,
         return pte;
 }
 
+static bool should_update_ggtt_with_bind(struct i915_ggtt *ggtt)
+{
+        struct intel_gt *gt = ggtt->vm.gt;
+
+        return intel_gt_is_bind_context_ready(gt);
+}
+
+static struct intel_context *gen8_ggtt_bind_get_ce(struct i915_ggtt *ggtt)
+{
+        struct intel_context *ce;
+        struct intel_gt *gt = ggtt->vm.gt;
+
+        if (intel_gt_is_wedged(gt))
+                return NULL;
+
+        ce = gt->engine[BCS0]->bind_context;
+        GEM_BUG_ON(!ce);
+
+        /*
+         * If the GT is not awake already at this stage then fall back
+         * to a PCI-based GGTT update, otherwise __intel_wakeref_get_first()
+         * would conflict with fs_reclaim trying to allocate memory while
+         * doing rpm_resume().
+         */
+        if (!intel_gt_pm_get_if_awake(gt))
+                return NULL;
+
+        intel_engine_pm_get(ce->engine);
+
+        return ce;
+}
+
+static void gen8_ggtt_bind_put_ce(struct intel_context *ce)
+{
+        intel_engine_pm_put(ce->engine);
+        intel_gt_pm_put(ce->engine->gt);
+}
+
+static bool gen8_ggtt_bind_ptes(struct i915_ggtt *ggtt, u32 offset,
+                                struct sg_table *pages, u32 num_entries,
+                                const gen8_pte_t pte)
+{
+        struct i915_sched_attr attr = {};
+        struct intel_gt *gt = ggtt->vm.gt;
+        const gen8_pte_t scratch_pte = ggtt->vm.scratch[0]->encode;
+        struct sgt_iter iter;
+        struct i915_request *rq;
+        struct intel_context *ce;
+        u32 *cs;
+
+        if (!num_entries)
+                return true;
+
+        ce = gen8_ggtt_bind_get_ce(ggtt);
+        if (!ce)
+                return false;
+
+        if (pages)
+                iter = __sgt_iter(pages->sgl, true);
+
+        while (num_entries) {
+                int count = 0;
+                dma_addr_t addr;
+                /*
+                 * MI_UPDATE_GTT can update 512 entries in a single command,
+                 * but that ends up with an engine reset; 511 works.
+                 */
+                u32 n_ptes = min_t(u32, 511, num_entries);
+
+                if (mutex_lock_interruptible(&ce->timeline->mutex))
+                        goto put_ce;
+
+                intel_context_enter(ce);
+                rq = __i915_request_create(ce, GFP_NOWAIT | GFP_ATOMIC);
+                intel_context_exit(ce);
+                if (IS_ERR(rq)) {
+                        GT_TRACE(gt, "Failed to get bind request\n");
+                        mutex_unlock(&ce->timeline->mutex);
+                        goto put_ce;
+                }
+
+                cs = intel_ring_begin(rq, 2 * n_ptes + 2);
+                if (IS_ERR(cs)) {
+                        GT_TRACE(gt, "Failed to get ring space for GGTT bind\n");
+                        i915_request_set_error_once(rq, PTR_ERR(cs));
+                        /* once a request is created, it must be queued */
+                        goto queue_err_rq;
+                }
+
+                *cs++ = MI_UPDATE_GTT | (2 * n_ptes);
+                *cs++ = offset << 12;
+
+                if (pages) {
+                        for_each_sgt_daddr_next(addr, iter) {
+                                if (count == n_ptes)
+                                        break;
+                                *cs++ = lower_32_bits(pte | addr);
+                                *cs++ = upper_32_bits(pte | addr);
+                                count++;
+                        }
+                        /* fill remaining with scratch pte, if any */
+                        if (count < n_ptes) {
+                                memset64((u64 *)cs, scratch_pte,
+                                         n_ptes - count);
+                                cs += (n_ptes - count) * 2;
+                        }
+                } else {
+                        memset64((u64 *)cs, pte, n_ptes);
+                        cs += n_ptes * 2;
+                }
+
+                intel_ring_advance(rq, cs);
+queue_err_rq:
+                i915_request_get(rq);
+                __i915_request_commit(rq);
+                __i915_request_queue(rq, &attr);
+
+                mutex_unlock(&ce->timeline->mutex);
+                /* This will break if the request is complete or after engine reset */
+                i915_request_wait(rq, 0, MAX_SCHEDULE_TIMEOUT);
+                if (rq->fence.error)
+                        goto err_rq;
+
+                i915_request_put(rq);
+
+                num_entries -= n_ptes;
+                offset += n_ptes;
+        }
+
+        gen8_ggtt_bind_put_ce(ce);
+        return true;
+
+err_rq:
+        i915_request_put(rq);
+put_ce:
+        gen8_ggtt_bind_put_ce(ce);
+        return false;
+}
+
 static void gen8_set_pte(void __iomem *addr, gen8_pte_t pte)
 {
         writeq(pte, addr);
@@ -272,6 +416,21 @@ static void gen8_ggtt_insert_page(struct i915_address_space *vm,
         ggtt->invalidate(ggtt);
 }
 
+static void gen8_ggtt_insert_page_bind(struct i915_address_space *vm,
+                                       dma_addr_t addr, u64 offset,
+                                       unsigned int pat_index, u32 flags)
+{
+        struct i915_ggtt *ggtt = i915_vm_to_ggtt(vm);
+        gen8_pte_t pte;
+
+        pte = ggtt->vm.pte_encode(addr, pat_index, flags);
+        if (should_update_ggtt_with_bind(i915_vm_to_ggtt(vm)) &&
+            gen8_ggtt_bind_ptes(ggtt, offset, NULL, 1, pte))
+                return ggtt->invalidate(ggtt);
+
+        gen8_ggtt_insert_page(vm, addr, offset, pat_index, flags);
+}
+
 static void gen8_ggtt_insert_entries(struct i915_address_space *vm,
                                      struct i915_vma_resource *vma_res,
                                      unsigned int pat_index,
@@ -311,6 +470,50 @@ static void gen8_ggtt_insert_entries(struct i915_address_space *vm,
         ggtt->invalidate(ggtt);
 }
 
+static bool __gen8_ggtt_insert_entries_bind(struct i915_address_space *vm,
+                                            struct i915_vma_resource *vma_res,
+                                            unsigned int pat_index, u32 flags)
+{
+        struct i915_ggtt *ggtt = i915_vm_to_ggtt(vm);
+        gen8_pte_t scratch_pte = vm->scratch[0]->encode;
+        gen8_pte_t pte_encode;
+        u64 start, end;
+
+        pte_encode = ggtt->vm.pte_encode(0, pat_index, flags);
+        start = (vma_res->start - vma_res->guard) / I915_GTT_PAGE_SIZE;
+        end = start + vma_res->guard / I915_GTT_PAGE_SIZE;
+        if (!gen8_ggtt_bind_ptes(ggtt, start, NULL, end - start, scratch_pte))
+                goto err;
+
+        start = end;
+        end += (vma_res->node_size + vma_res->guard) / I915_GTT_PAGE_SIZE;
+        if (!gen8_ggtt_bind_ptes(ggtt, start, vma_res->bi.pages,
+                                 vma_res->node_size / I915_GTT_PAGE_SIZE, pte_encode))
+                goto err;
+
+        start += vma_res->node_size / I915_GTT_PAGE_SIZE;
+        if (!gen8_ggtt_bind_ptes(ggtt, start, NULL, end - start, scratch_pte))
+                goto err;
+
+        return true;
+
+err:
+        return false;
+}
+
+static void gen8_ggtt_insert_entries_bind(struct i915_address_space *vm,
+                                          struct i915_vma_resource *vma_res,
+                                          unsigned int pat_index, u32 flags)
+{
+        struct i915_ggtt *ggtt = i915_vm_to_ggtt(vm);
+
+        if (should_update_ggtt_with_bind(i915_vm_to_ggtt(vm)) &&
+            __gen8_ggtt_insert_entries_bind(vm, vma_res, pat_index, flags))
+                return ggtt->invalidate(ggtt);
+
+        gen8_ggtt_insert_entries(vm, vma_res, pat_index, flags);
+}
+
 static void gen8_ggtt_clear_range(struct i915_address_space *vm,
                                   u64 start, u64 length)
 {
@@ -332,6 +535,27 @@ static void gen8_ggtt_clear_range(struct i915_address_space *vm,
                 gen8_set_pte(&gtt_base[i], scratch_pte);
 }
 
+static void gen8_ggtt_scratch_range_bind(struct i915_address_space *vm,
+                                         u64 start, u64 length)
+{
+        struct i915_ggtt *ggtt = i915_vm_to_ggtt(vm);
+        unsigned int first_entry = start / I915_GTT_PAGE_SIZE;
+        unsigned int num_entries = length / I915_GTT_PAGE_SIZE;
+        const gen8_pte_t scratch_pte = vm->scratch[0]->encode;
+        const int max_entries = ggtt_total_entries(ggtt) - first_entry;
+
+        if (WARN(num_entries > max_entries,
+                 "First entry = %d; Num entries = %d (max=%d)\n",
+                 first_entry, num_entries, max_entries))
+                num_entries = max_entries;
+
+        if (should_update_ggtt_with_bind(ggtt) &&
+            gen8_ggtt_bind_ptes(ggtt, first_entry, NULL, num_entries, scratch_pte))
+                return ggtt->invalidate(ggtt);
+
+        gen8_ggtt_clear_range(vm, start, length);
+}
+
 static void gen6_ggtt_insert_page(struct i915_address_space *vm,
                                   dma_addr_t addr,
                                   u64 offset,
@@ -997,6 +1221,17 @@ static int gen8_gmch_probe(struct i915_ggtt *ggtt)
                         I915_VMA_GLOBAL_BIND | I915_VMA_LOCAL_BIND;
         }
 
+        if (i915_ggtt_require_binder(i915)) {
+                ggtt->vm.scratch_range = gen8_ggtt_scratch_range_bind;
+                ggtt->vm.insert_page = gen8_ggtt_insert_page_bind;
+                ggtt->vm.insert_entries = gen8_ggtt_insert_entries_bind;
+                /*
+                 * When the GPU is hung, we might need to bind VMAs for error
+                 * capture. Fall back to CPU-based GGTT updates in that case.
+                 */
+                ggtt->vm.raw_insert_page = gen8_ggtt_insert_page;
+        }
+
         if (intel_uc_wants_guc(&ggtt->vm.gt->uc))
                 ggtt->invalidate = guc_ggtt_invalidate;
         else
diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.h b/drivers/gpu/drm/i915/gt/intel_gtt.h
index bb7e8d48a060..49835a86afd7 100644
--- a/drivers/gpu/drm/i915/gt/intel_gtt.h
+++ b/drivers/gpu/drm/i915/gt/intel_gtt.h
@@ -171,6 +171,9 @@ struct intel_gt;
 #define for_each_sgt_daddr(__dp, __iter, __sgt) \
         __for_each_sgt_daddr(__dp, __iter, __sgt, I915_GTT_PAGE_SIZE)
 
+#define for_each_sgt_daddr_next(__dp, __iter) \
+        __for_each_daddr_next(__dp, __iter, I915_GTT_PAGE_SIZE)
+
 struct i915_page_table {
         struct drm_i915_gem_object *base;
         union {
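
The lock ordering described in the v2 note, sketched with the names used in
this patch (simplified; not the exact upstream code): the GT and engine
wakerefs are taken before ce->timeline->mutex, so nothing under the mutex can
trigger a runtime resume, and therefore nothing under the mutex can recurse
into fs_reclaim while the shrinker might be waiting on that same mutex.

        /* Simplified ordering sketch for gen8_ggtt_bind_get_ce()/bind_ptes(). */
        if (!intel_gt_pm_get_if_awake(gt))
                return false;                   /* GT asleep: use the CPU/MMIO path instead */
        intel_engine_pm_get(ce->engine);

        mutex_lock(&ce->timeline->mutex);       /* safe: no rpm resume, no fs_reclaim below */
        /* ... create, fill and queue the MI_UPDATE_GTT request ... */
        mutex_unlock(&ce->timeline->mutex);

        intel_engine_pm_put(ce->engine);
        intel_gt_pm_put(gt);
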
From patchwork Wed Sep 6 11:31:21 2023
X-Patchwork-Submitter: Nirmoy Das
X-Patchwork-Id: 13375598
From: Nirmoy Das
To: intel-gfx@lists.freedesktop.org
Cc: andi.shyti@intel.com, chris.p.wilson@linux.intel.com,
    matthew.d.roper@intel.com, Nirmoy Das
Date: Wed, 6 Sep 2023 13:31:21 +0200
Message-ID: <20230906113121.30472-6-nirmoy.das@intel.com>
In-Reply-To: <20230906113121.30472-1-nirmoy.das@intel.com>
Subject: [Intel-gfx] [PATCH 5/5] drm/i915: Enable GGTT updates with binder in MTL

MTL can hang because of a HW bug while reading from and writing to the
LMEM/GTTMMADR BAR in parallel, so try to reduce GGTT-update-related PCI
transactions by using the blitter command, as recommended for
Wa_13010847436 and Wa_14019519902.

To issue GPU commands the driver must be primed to receive requests, so
keep binder-based GGTT updates disabled until driver probing completes.
Moreover, temporarily disable the blitter path before entering suspend
and re-enable it after resume. This is acceptable as those transition
periods are mostly single-threaded.

v2: Disable the GGTT blitter prior to runtime suspend and re-enable it
    after runtime resume.
v3: Drop the above, as we already check whether the GT is awake.

Signed-off-by: Nirmoy Das
Signed-off-by: Oak Zeng
---
 drivers/gpu/drm/i915/gt/intel_gtt.c | 3 ++-
 drivers/gpu/drm/i915/i915_driver.c  | 5 +++++
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.c b/drivers/gpu/drm/i915/gt/intel_gtt.c
index 4c89eb8d9af7..4fbed27ef0ec 100644
--- a/drivers/gpu/drm/i915/gt/intel_gtt.c
+++ b/drivers/gpu/drm/i915/gt/intel_gtt.c
@@ -23,7 +23,8 @@
 
 bool i915_ggtt_require_binder(struct drm_i915_private *i915)
 {
-        return false;
+        /* Wa_13010847436 & Wa_14019519902 */
+        return MEDIA_VER_FULL(i915) == IP_VER(13, 0);
 }
 
 static bool intel_ggtt_update_needs_vtd_wa(struct drm_i915_private *i915)
diff --git a/drivers/gpu/drm/i915/i915_driver.c b/drivers/gpu/drm/i915/i915_driver.c
index f8dbee7a5af7..8cc289acdb39 100644
--- a/drivers/gpu/drm/i915/i915_driver.c
+++ b/drivers/gpu/drm/i915/i915_driver.c
@@ -815,6 +815,7 @@ int i915_driver_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
         i915_welcome_messages(i915);
 
         i915->do_release = true;
+        intel_gt_bind_context_set_ready(to_gt(i915), true);
 
         return 0;
 
@@ -855,6 +856,7 @@ void i915_driver_remove(struct drm_i915_private *i915)
 {
         intel_wakeref_t wakeref;
 
+        intel_gt_bind_context_set_ready(to_gt(i915), false);
         wakeref = intel_runtime_pm_get(&i915->runtime_pm);
 
         i915_driver_unregister(i915);
@@ -1077,6 +1079,8 @@ static int i915_drm_suspend(struct drm_device *dev)
         struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev);
         pci_power_t opregion_target_state;
 
+        intel_gt_bind_context_set_ready(to_gt(dev_priv), false);
+
         disable_rpm_wakeref_asserts(&dev_priv->runtime_pm);
 
         /* We do a lot of poking in a lot of registers, make sure they work
@@ -1264,6 +1268,7 @@ static int i915_drm_resume(struct drm_device *dev)
         intel_gvt_resume(dev_priv);
 
         enable_rpm_wakeref_asserts(&dev_priv->runtime_pm);
+        intel_gt_bind_context_set_ready(to_gt(dev_priv), true);
 
         return 0;
 }
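
Putting the series together, the runtime decision of whether a GGTT update
goes through the blitter can be summarised as below. This is an illustrative
helper combining the checks spread across the series, not a function in the
tree.

        static bool use_blitter_for_ggtt_update(struct intel_gt *gt)
        {
                /* Wa_13010847436 & Wa_14019519902: only MTL needs the binder */
                if (!i915_ggtt_require_binder(gt->i915))
                        return false;

                /* disabled until probe finishes and across suspend/resume */
                if (!intel_gt_is_bind_context_ready(gt))
                        return false;

                /* no point submitting requests to a wedged GPU */
                if (intel_gt_is_wedged(gt))
                        return false;

                /* never wake the GT from here; a sleeping GT falls back to MMIO */
                if (!intel_gt_pm_get_if_awake(gt))
                        return false;

                return true;    /* caller now owns a GT PM reference and must drop it */
        }
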