From patchwork Tue Jul 20 20:57:21 2021
X-Patchwork-Submitter: Matthew Brost
X-Patchwork-Id: 12390219
From: Matthew Brost
To: dri-devel@lists.freedesktop.org
Subject: [RFC PATCH 01/42] drm/i915/guc: GuC submission squashed into single patch
Date: Tue, 20 Jul 2021 13:57:21 -0700
Message-Id: <20210720205802.39610-2-matthew.brost@intel.com>
In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com>
References: <20210720205802.39610-1-matthew.brost@intel.com>

https://patchwork.freedesktop.org/series/91840/

Signed-off-by: Matthew Brost
---
 drivers/gpu/drm/i915/Makefile                 |   1 +
 drivers/gpu/drm/i915/gem/i915_gem_context.c   |  21 +-
 drivers/gpu/drm/i915/gem/i915_gem_mman.c      |   3 +-
 drivers/gpu/drm/i915/gt/gen8_engine_cs.c      |   6 +-
 drivers/gpu/drm/i915/gt/intel_breadcrumbs.c   |  44 +-
 drivers/gpu/drm/i915/gt/intel_breadcrumbs.h   |  16 +-
 .../gpu/drm/i915/gt/intel_breadcrumbs_types.h |   7 +
 drivers/gpu/drm/i915/gt/intel_context.c       |  50 +-
 drivers/gpu/drm/i915/gt/intel_context.h       |  50 +-
 drivers/gpu/drm/i915/gt/intel_context_types.h |  63 +-
 drivers/gpu/drm/i915/gt/intel_engine.h        |  57 +-
 drivers/gpu/drm/i915/gt/intel_engine_cs.c     | 182 +-
 .../gpu/drm/i915/gt/intel_engine_heartbeat.c  |  71 +-
.../gpu/drm/i915/gt/intel_engine_heartbeat.h | 4 + drivers/gpu/drm/i915/gt/intel_engine_types.h | 13 +- drivers/gpu/drm/i915/gt/intel_engine_user.c | 4 + .../drm/i915/gt/intel_execlists_submission.c | 89 +- .../drm/i915/gt/intel_execlists_submission.h | 4 - drivers/gpu/drm/i915/gt/intel_gt.c | 21 + drivers/gpu/drm/i915/gt/intel_gt.h | 2 + drivers/gpu/drm/i915/gt/intel_gt_pm.c | 6 +- drivers/gpu/drm/i915/gt/intel_gt_requests.c | 21 +- drivers/gpu/drm/i915/gt/intel_gt_requests.h | 9 +- drivers/gpu/drm/i915/gt/intel_lrc_reg.h | 1 - drivers/gpu/drm/i915/gt/intel_reset.c | 50 +- .../gpu/drm/i915/gt/intel_ring_submission.c | 42 + drivers/gpu/drm/i915/gt/intel_rps.c | 4 + drivers/gpu/drm/i915/gt/intel_workarounds.c | 46 +- .../gpu/drm/i915/gt/intel_workarounds_types.h | 1 + drivers/gpu/drm/i915/gt/mock_engine.c | 34 +- drivers/gpu/drm/i915/gt/selftest_context.c | 10 + .../drm/i915/gt/selftest_engine_heartbeat.c | 22 + .../drm/i915/gt/selftest_engine_heartbeat.h | 2 + drivers/gpu/drm/i915/gt/selftest_execlists.c | 12 +- drivers/gpu/drm/i915/gt/selftest_hangcheck.c | 314 +- drivers/gpu/drm/i915/gt/selftest_mocs.c | 50 +- .../gpu/drm/i915/gt/selftest_workarounds.c | 132 +- .../gpu/drm/i915/gt/uc/abi/guc_actions_abi.h | 15 + drivers/gpu/drm/i915/gt/uc/intel_guc.c | 82 +- drivers/gpu/drm/i915/gt/uc/intel_guc.h | 101 +- drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c | 463 ++- drivers/gpu/drm/i915/gt/uc/intel_guc_ads.h | 4 + drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 132 +- drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h | 16 +- .../gpu/drm/i915/gt/uc/intel_guc_debugfs.c | 25 +- drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h | 88 +- .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 2848 +++++++++++++++-- .../gpu/drm/i915/gt/uc/intel_guc_submission.h | 18 +- drivers/gpu/drm/i915/gt/uc/intel_uc.c | 101 +- drivers/gpu/drm/i915/gt/uc/intel_uc.h | 11 + drivers/gpu/drm/i915/i915_debugfs_params.c | 31 + drivers/gpu/drm/i915/i915_gem_evict.c | 1 + drivers/gpu/drm/i915/i915_gpu_error.c | 25 +- drivers/gpu/drm/i915/i915_reg.h | 2 + drivers/gpu/drm/i915/i915_request.c | 173 +- drivers/gpu/drm/i915/i915_request.h | 29 + drivers/gpu/drm/i915/i915_scheduler.c | 16 +- drivers/gpu/drm/i915/i915_scheduler.h | 10 +- drivers/gpu/drm/i915/i915_scheduler_types.h | 22 + drivers/gpu/drm/i915/i915_trace.h | 199 +- drivers/gpu/drm/i915/selftests/i915_request.c | 4 +- .../gpu/drm/i915/selftests/igt_flush_test.c | 2 +- .../gpu/drm/i915/selftests/igt_live_test.c | 2 +- .../i915/selftests/intel_scheduler_helpers.c | 89 + .../i915/selftests/intel_scheduler_helpers.h | 35 + .../gpu/drm/i915/selftests/mock_gem_device.c | 3 +- include/uapi/drm/i915_drm.h | 9 + 67 files changed, 5122 insertions(+), 898 deletions(-) create mode 100644 drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.c create mode 100644 drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.h diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile index 10b3bb6207ba..ab7679957623 100644 --- a/drivers/gpu/drm/i915/Makefile +++ b/drivers/gpu/drm/i915/Makefile @@ -280,6 +280,7 @@ i915-$(CONFIG_DRM_I915_CAPTURE_ERROR) += i915_gpu_error.o i915-$(CONFIG_DRM_I915_SELFTEST) += \ gem/selftests/i915_gem_client_blt.o \ gem/selftests/igt_gem_utils.o \ + selftests/intel_scheduler_helpers.o \ selftests/i915_random.o \ selftests/i915_selftest.o \ selftests/igt_atomic.o \ diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c index 7d6f52d8a801..d87a4c6da5bc 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c +++ 
b/drivers/gpu/drm/i915/gem/i915_gem_context.c @@ -74,7 +74,6 @@ #include "gt/intel_context_param.h" #include "gt/intel_engine_heartbeat.h" #include "gt/intel_engine_user.h" -#include "gt/intel_execlists_submission.h" /* virtual_engine */ #include "gt/intel_gpu_commands.h" #include "gt/intel_ring.h" @@ -363,9 +362,6 @@ set_proto_ctx_engines_balance(struct i915_user_extension __user *base, if (!HAS_EXECLISTS(i915)) return -ENODEV; - if (intel_uc_uses_guc_submission(&i915->gt.uc)) - return -ENODEV; /* not implement yet */ - if (get_user(idx, &ext->engine_index)) return -EFAULT; @@ -495,6 +491,11 @@ set_proto_ctx_engines_bond(struct i915_user_extension __user *base, void *data) return -EINVAL; } + if (intel_engine_uses_guc(master)) { + DRM_DEBUG("bonding extension not supported with GuC submission"); + return -ENODEV; + } + if (get_user(num_bonds, &ext->num_bonds)) return -EFAULT; @@ -799,7 +800,8 @@ static int intel_context_set_gem(struct intel_context *ce, } if (ctx->sched.priority >= I915_PRIORITY_NORMAL && - intel_engine_has_timeslices(ce->engine)) + intel_engine_has_timeslices(ce->engine) && + intel_engine_has_semaphores(ce->engine)) __set_bit(CONTEXT_USE_SEMAPHORES, &ce->flags); if (IS_ACTIVE(CONFIG_DRM_I915_REQUEST_TIMEOUT) && @@ -949,8 +951,8 @@ static struct i915_gem_engines *user_engines(struct i915_gem_context *ctx, break; case I915_GEM_ENGINE_TYPE_BALANCED: - ce = intel_execlists_create_virtual(pe[n].siblings, - pe[n].num_siblings); + ce = intel_engine_create_virtual(pe[n].siblings, + pe[n].num_siblings); break; case I915_GEM_ENGINE_TYPE_INVALID: @@ -1082,7 +1084,7 @@ static void kill_engines(struct i915_gem_engines *engines, bool ban) for_each_gem_engine(ce, engines, it) { struct intel_engine_cs *engine; - if (ban && intel_context_set_banned(ce)) + if (ban && intel_context_ban(ce, NULL)) continue; /* @@ -1778,7 +1780,8 @@ static void __apply_priority(struct intel_context *ce, void *arg) if (!intel_engine_has_timeslices(ce->engine)) return; - if (ctx->sched.priority >= I915_PRIORITY_NORMAL) + if (ctx->sched.priority >= I915_PRIORITY_NORMAL && + intel_engine_has_semaphores(ce->engine)) intel_context_set_use_semaphores(ce); else intel_context_clear_use_semaphores(ce); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c index a90f796e85c0..6fffd4d377c2 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c @@ -645,7 +645,8 @@ mmap_offset_attach(struct drm_i915_gem_object *obj, goto insert; /* Attempt to reap some mmap space from dead objects */ - err = intel_gt_retire_requests_timeout(&i915->gt, MAX_SCHEDULE_TIMEOUT); + err = intel_gt_retire_requests_timeout(&i915->gt, MAX_SCHEDULE_TIMEOUT, + NULL); if (err) goto err; diff --git a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c index 87b06572fd2e..f7aae502ec3d 100644 --- a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c +++ b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c @@ -506,7 +506,8 @@ gen8_emit_fini_breadcrumb_tail(struct i915_request *rq, u32 *cs) *cs++ = MI_USER_INTERRUPT; *cs++ = MI_ARB_ON_OFF | MI_ARB_ENABLE; - if (intel_engine_has_semaphores(rq->engine)) + if (intel_engine_has_semaphores(rq->engine) && + !intel_uc_uses_guc_submission(&rq->engine->gt->uc)) cs = emit_preempt_busywait(rq, cs); rq->tail = intel_ring_offset(rq, cs); @@ -598,7 +599,8 @@ gen12_emit_fini_breadcrumb_tail(struct i915_request *rq, u32 *cs) *cs++ = MI_USER_INTERRUPT; *cs++ = MI_ARB_ON_OFF | MI_ARB_ENABLE; - if 
(intel_engine_has_semaphores(rq->engine)) + if (intel_engine_has_semaphores(rq->engine) && + !intel_uc_uses_guc_submission(&rq->engine->gt->uc)) cs = gen12_emit_preempt_busywait(rq, cs); rq->tail = intel_ring_offset(rq, cs); diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c index 38cc42783dfb..209cf265bf74 100644 --- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c +++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c @@ -15,28 +15,14 @@ #include "intel_gt_pm.h" #include "intel_gt_requests.h" -static bool irq_enable(struct intel_engine_cs *engine) +static bool irq_enable(struct intel_breadcrumbs *b) { - if (!engine->irq_enable) - return false; - - /* Caller disables interrupts */ - spin_lock(&engine->gt->irq_lock); - engine->irq_enable(engine); - spin_unlock(&engine->gt->irq_lock); - - return true; + return intel_engine_irq_enable(b->irq_engine); } -static void irq_disable(struct intel_engine_cs *engine) +static void irq_disable(struct intel_breadcrumbs *b) { - if (!engine->irq_disable) - return; - - /* Caller disables interrupts */ - spin_lock(&engine->gt->irq_lock); - engine->irq_disable(engine); - spin_unlock(&engine->gt->irq_lock); + intel_engine_irq_disable(b->irq_engine); } static void __intel_breadcrumbs_arm_irq(struct intel_breadcrumbs *b) @@ -57,7 +43,7 @@ static void __intel_breadcrumbs_arm_irq(struct intel_breadcrumbs *b) WRITE_ONCE(b->irq_armed, true); /* Requests may have completed before we could enable the interrupt. */ - if (!b->irq_enabled++ && irq_enable(b->irq_engine)) + if (!b->irq_enabled++ && b->irq_enable(b)) irq_work_queue(&b->irq_work); } @@ -76,7 +62,7 @@ static void __intel_breadcrumbs_disarm_irq(struct intel_breadcrumbs *b) { GEM_BUG_ON(!b->irq_enabled); if (!--b->irq_enabled) - irq_disable(b->irq_engine); + b->irq_disable(b); WRITE_ONCE(b->irq_armed, false); intel_gt_pm_put_async(b->irq_engine->gt); @@ -259,6 +245,9 @@ static void signal_irq_work(struct irq_work *work) llist_entry(signal, typeof(*rq), signal_node); struct list_head cb_list; + if (rq->engine->sched_engine->retire_inflight_request_prio) + rq->engine->sched_engine->retire_inflight_request_prio(rq); + spin_lock(&rq->lock); list_replace(&rq->fence.cb_list, &cb_list); __dma_fence_signal__timestamp(&rq->fence, timestamp); @@ -281,7 +270,7 @@ intel_breadcrumbs_create(struct intel_engine_cs *irq_engine) if (!b) return NULL; - b->irq_engine = irq_engine; + kref_init(&b->ref); spin_lock_init(&b->signalers_lock); INIT_LIST_HEAD(&b->signalers); @@ -290,6 +279,10 @@ intel_breadcrumbs_create(struct intel_engine_cs *irq_engine) spin_lock_init(&b->irq_lock); init_irq_work(&b->irq_work, signal_irq_work); + b->irq_engine = irq_engine; + b->irq_enable = irq_enable; + b->irq_disable = irq_disable; + return b; } @@ -303,9 +296,9 @@ void intel_breadcrumbs_reset(struct intel_breadcrumbs *b) spin_lock_irqsave(&b->irq_lock, flags); if (b->irq_enabled) - irq_enable(b->irq_engine); + b->irq_enable(b); else - irq_disable(b->irq_engine); + b->irq_disable(b); spin_unlock_irqrestore(&b->irq_lock, flags); } @@ -325,11 +318,14 @@ void __intel_breadcrumbs_park(struct intel_breadcrumbs *b) } } -void intel_breadcrumbs_free(struct intel_breadcrumbs *b) +void intel_breadcrumbs_free(struct kref *kref) { + struct intel_breadcrumbs *b = container_of(kref, typeof(*b), ref); + irq_work_sync(&b->irq_work); GEM_BUG_ON(!list_empty(&b->signalers)); GEM_BUG_ON(b->irq_armed); + kfree(b); } diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.h b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.h index 
3ce5ce270b04..be0d4f379a85 100644 --- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.h +++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.h @@ -9,7 +9,7 @@ #include #include -#include "intel_engine_types.h" +#include "intel_breadcrumbs_types.h" struct drm_printer; struct i915_request; @@ -17,7 +17,7 @@ struct intel_breadcrumbs; struct intel_breadcrumbs * intel_breadcrumbs_create(struct intel_engine_cs *irq_engine); -void intel_breadcrumbs_free(struct intel_breadcrumbs *b); +void intel_breadcrumbs_free(struct kref *kref); void intel_breadcrumbs_reset(struct intel_breadcrumbs *b); void __intel_breadcrumbs_park(struct intel_breadcrumbs *b); @@ -48,4 +48,16 @@ void i915_request_cancel_breadcrumb(struct i915_request *request); void intel_context_remove_breadcrumbs(struct intel_context *ce, struct intel_breadcrumbs *b); +static inline struct intel_breadcrumbs * +intel_breadcrumbs_get(struct intel_breadcrumbs *b) +{ + kref_get(&b->ref); + return b; +} + +static inline void intel_breadcrumbs_put(struct intel_breadcrumbs *b) +{ + kref_put(&b->ref, intel_breadcrumbs_free); +} + #endif /* __INTEL_BREADCRUMBS__ */ diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs_types.h b/drivers/gpu/drm/i915/gt/intel_breadcrumbs_types.h index 3a084ce8ff5e..72dfd3748c4c 100644 --- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs_types.h +++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs_types.h @@ -7,10 +7,13 @@ #define __INTEL_BREADCRUMBS_TYPES__ #include +#include #include #include #include +#include "intel_engine_types.h" + /* * Rather than have every client wait upon all user interrupts, * with the herd waking after every interrupt and each doing the @@ -29,6 +32,7 @@ * the overhead of waking that client is much preferred. */ struct intel_breadcrumbs { + struct kref ref; atomic_t active; spinlock_t signalers_lock; /* protects the list of signalers */ @@ -42,7 +46,10 @@ struct intel_breadcrumbs { bool irq_armed; /* Not all breadcrumbs are attached to physical HW */ + intel_engine_mask_t engine_mask; struct intel_engine_cs *irq_engine; + bool (*irq_enable)(struct intel_breadcrumbs *b); + void (*irq_disable)(struct intel_breadcrumbs *b); }; #endif /* __INTEL_BREADCRUMBS_TYPES__ */ diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c index bd63813c8a80..b1e3d00fb1f2 100644 --- a/drivers/gpu/drm/i915/gt/intel_context.c +++ b/drivers/gpu/drm/i915/gt/intel_context.c @@ -8,6 +8,7 @@ #include "i915_drv.h" #include "i915_globals.h" +#include "i915_trace.h" #include "intel_context.h" #include "intel_engine.h" @@ -28,6 +29,7 @@ static void rcu_context_free(struct rcu_head *rcu) { struct intel_context *ce = container_of(rcu, typeof(*ce), rcu); + trace_intel_context_free(ce); kmem_cache_free(global.slab_ce, ce); } @@ -46,6 +48,7 @@ intel_context_create(struct intel_engine_cs *engine) return ERR_PTR(-ENOMEM); intel_context_init(ce, engine); + trace_intel_context_create(ce); return ce; } @@ -80,7 +83,7 @@ static int intel_context_active_acquire(struct intel_context *ce) __i915_active_acquire(&ce->active); - if (intel_context_is_barrier(ce)) + if (intel_context_is_barrier(ce) || intel_engine_uses_guc(ce->engine)) return 0; /* Preallocate tracking nodes */ @@ -268,6 +271,8 @@ int __intel_context_do_pin_ww(struct intel_context *ce, GEM_BUG_ON(!intel_context_is_pinned(ce)); /* no overflow! 
*/ + trace_intel_context_do_pin(ce); + err_unlock: mutex_unlock(&ce->pin_mutex); err_post_unpin: @@ -306,9 +311,9 @@ int __intel_context_do_pin(struct intel_context *ce) return err; } -void intel_context_unpin(struct intel_context *ce) +void __intel_context_do_unpin(struct intel_context *ce, int sub) { - if (!atomic_dec_and_test(&ce->pin_count)) + if (!atomic_sub_and_test(sub, &ce->pin_count)) return; CE_TRACE(ce, "unpin\n"); @@ -323,6 +328,7 @@ void intel_context_unpin(struct intel_context *ce) */ intel_context_get(ce); intel_context_active_release(ce); + trace_intel_context_do_unpin(ce); intel_context_put(ce); } @@ -360,6 +366,12 @@ static int __intel_context_active(struct i915_active *active) return 0; } +static int sw_fence_dummy_notify(struct i915_sw_fence *sf, + enum i915_sw_fence_notify state) +{ + return NOTIFY_DONE; +} + void intel_context_init(struct intel_context *ce, struct intel_engine_cs *engine) { @@ -384,6 +396,18 @@ intel_context_init(struct intel_context *ce, struct intel_engine_cs *engine) mutex_init(&ce->pin_mutex); + spin_lock_init(&ce->guc_state.lock); + INIT_LIST_HEAD(&ce->guc_state.fences); + + spin_lock_init(&ce->guc_active.lock); + INIT_LIST_HEAD(&ce->guc_active.requests); + + ce->guc_id = GUC_INVALID_LRC_ID; + INIT_LIST_HEAD(&ce->guc_id_link); + + i915_sw_fence_init(&ce->guc_blocked, sw_fence_dummy_notify); + i915_sw_fence_commit(&ce->guc_blocked); + i915_active_init(&ce->active, __intel_context_active, __intel_context_retire, 0); } @@ -500,6 +524,26 @@ struct i915_request *intel_context_create_request(struct intel_context *ce) return rq; } +struct i915_request *intel_context_find_active_request(struct intel_context *ce) +{ + struct i915_request *rq, *active = NULL; + unsigned long flags; + + GEM_BUG_ON(!intel_engine_uses_guc(ce->engine)); + + spin_lock_irqsave(&ce->guc_active.lock, flags); + list_for_each_entry_reverse(rq, &ce->guc_active.requests, + sched.link) { + if (i915_request_completed(rq)) + break; + + active = rq; + } + spin_unlock_irqrestore(&ce->guc_active.lock, flags); + + return active; +} + #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST) #include "selftest_context.c" #endif diff --git a/drivers/gpu/drm/i915/gt/intel_context.h b/drivers/gpu/drm/i915/gt/intel_context.h index b10cbe8fee99..876bdb08303c 100644 --- a/drivers/gpu/drm/i915/gt/intel_context.h +++ b/drivers/gpu/drm/i915/gt/intel_context.h @@ -16,6 +16,7 @@ #include "intel_engine_types.h" #include "intel_ring_types.h" #include "intel_timeline_types.h" +#include "i915_trace.h" #define CE_TRACE(ce, fmt, ...) 
do { \ const struct intel_context *ce__ = (ce); \ @@ -69,6 +70,13 @@ intel_context_is_pinned(struct intel_context *ce) return atomic_read(&ce->pin_count); } +static inline void intel_context_cancel_request(struct intel_context *ce, + struct i915_request *rq) +{ + GEM_BUG_ON(!ce->ops->cancel_request); + return ce->ops->cancel_request(ce, rq); +} + /** * intel_context_unlock_pinned - Releases the earlier locking of 'pinned' status * @ce - the context @@ -113,7 +121,32 @@ static inline void __intel_context_pin(struct intel_context *ce) atomic_inc(&ce->pin_count); } -void intel_context_unpin(struct intel_context *ce); +void __intel_context_do_unpin(struct intel_context *ce, int sub); + +static inline void intel_context_sched_disable_unpin(struct intel_context *ce) +{ + __intel_context_do_unpin(ce, 2); +} + +static inline void intel_context_unpin(struct intel_context *ce) +{ + if (!ce->ops->sched_disable) { + __intel_context_do_unpin(ce, 1); + } else { + /* + * Move ownership of this pin to the scheduling disable which is + * an async operation. When that operation completes the above + * intel_context_sched_disable_unpin is called potentially + * unpinning the context. + */ + while (!atomic_add_unless(&ce->pin_count, -1, 1)) { + if (atomic_cmpxchg(&ce->pin_count, 1, 2) == 1) { + ce->ops->sched_disable(ce); + break; + } + } + } +} void intel_context_enter_engine(struct intel_context *ce); void intel_context_exit_engine(struct intel_context *ce); @@ -175,6 +208,9 @@ int intel_context_prepare_remote_request(struct intel_context *ce, struct i915_request *intel_context_create_request(struct intel_context *ce); +struct i915_request * +intel_context_find_active_request(struct intel_context *ce); + static inline bool intel_context_is_barrier(const struct intel_context *ce) { return test_bit(CONTEXT_BARRIER_BIT, &ce->flags); @@ -215,6 +251,18 @@ static inline bool intel_context_set_banned(struct intel_context *ce) return test_and_set_bit(CONTEXT_BANNED, &ce->flags); } +static inline bool intel_context_ban(struct intel_context *ce, + struct i915_request *rq) +{ + bool ret = intel_context_set_banned(ce); + + trace_intel_context_ban(ce); + if (ce->ops->ban) + ce->ops->ban(ce, rq); + + return ret; +} + static inline bool intel_context_force_single_submission(const struct intel_context *ce) { diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h index 90026c177105..fe555551c2d2 100644 --- a/drivers/gpu/drm/i915/gt/intel_context_types.h +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h @@ -13,12 +13,14 @@ #include #include "i915_active_types.h" +#include "i915_sw_fence.h" #include "i915_utils.h" #include "intel_engine_types.h" #include "intel_sseu.h" -#define CONTEXT_REDZONE POISON_INUSE +#include "uc/intel_guc_fwif.h" +#define CONTEXT_REDZONE POISON_INUSE DECLARE_EWMA(runtime, 3, 8); struct i915_gem_context; @@ -35,16 +37,29 @@ struct intel_context_ops { int (*alloc)(struct intel_context *ce); + void (*ban)(struct intel_context *ce, struct i915_request *rq); + int (*pre_pin)(struct intel_context *ce, struct i915_gem_ww_ctx *ww, void **vaddr); int (*pin)(struct intel_context *ce, void *vaddr); void (*unpin)(struct intel_context *ce); void (*post_unpin)(struct intel_context *ce); + void (*cancel_request)(struct intel_context *ce, + struct i915_request *rq); + void (*enter)(struct intel_context *ce); void (*exit)(struct intel_context *ce); + void (*sched_disable)(struct intel_context *ce); + void (*reset)(struct intel_context *ce); void (*destroy)(struct 
kref *kref); + + /* virtual engine/context interface */ + struct intel_context *(*create_virtual)(struct intel_engine_cs **engine, + unsigned int count); + struct intel_engine_cs *(*get_sibling)(struct intel_engine_cs *engine, + unsigned int sibling); }; struct intel_context { @@ -96,6 +111,7 @@ struct intel_context { #define CONTEXT_BANNED 6 #define CONTEXT_FORCE_SINGLE_SUBMISSION 7 #define CONTEXT_NOPREEMPT 8 +#define CONTEXT_LRCA_DIRTY 9 struct { u64 timeout_us; @@ -137,6 +153,51 @@ struct intel_context { struct intel_sseu sseu; u8 wa_bb_page; /* if set, page num reserved for context workarounds */ + + struct { + /** lock: protects everything in guc_state */ + spinlock_t lock; + /** + * sched_state: scheduling state of this context using GuC + * submission + */ + u8 sched_state; + /* + * fences: maintains of list of requests that have a submit + * fence related to GuC submission + */ + struct list_head fences; + } guc_state; + + struct { + /** lock: protects everything in guc_active */ + spinlock_t lock; + /** requests: active requests on this context */ + struct list_head requests; + } guc_active; + + /* GuC scheduling state flags that do not require a lock. */ + atomic_t guc_sched_state_no_lock; + + /* GuC LRC descriptor ID */ + u16 guc_id; + + /* GuC LRC descriptor reference count */ + atomic_t guc_id_ref; + + /* + * GuC ID link - in list when unpinned but guc_id still valid in GuC + */ + struct list_head guc_id_link; + + /* GuC context blocked fence */ + struct i915_sw_fence guc_blocked; + + /* + * GuC priority management + */ + u8 guc_prio; + u32 guc_prio_count[GUC_CLIENT_PRIORITY_NUM]; }; #endif /* __INTEL_CONTEXT_TYPES__ */ diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h b/drivers/gpu/drm/i915/gt/intel_engine.h index f911c1224ab2..04ce7a3e1d82 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine.h +++ b/drivers/gpu/drm/i915/gt/intel_engine.h @@ -212,6 +212,9 @@ void intel_engine_get_instdone(const struct intel_engine_cs *engine, void intel_engine_init_execlists(struct intel_engine_cs *engine); +bool intel_engine_irq_enable(struct intel_engine_cs *engine); +void intel_engine_irq_disable(struct intel_engine_cs *engine); + static inline void __intel_engine_reset(struct intel_engine_cs *engine, bool stalled) { @@ -237,12 +240,15 @@ __printf(3, 4) void intel_engine_dump(struct intel_engine_cs *engine, struct drm_printer *m, const char *header, ...); +void intel_engine_dump_active_requests(struct list_head *requests, + struct i915_request *hung_rq, + struct drm_printer *m); ktime_t intel_engine_get_busy_time(struct intel_engine_cs *engine, ktime_t *now); struct i915_request * -intel_engine_find_active_request(struct intel_engine_cs *engine); +intel_engine_execlist_find_hung_request(struct intel_engine_cs *engine); u32 intel_engine_context_size(struct intel_gt *gt, u8 class); struct intel_context * @@ -273,13 +279,60 @@ intel_engine_has_preempt_reset(const struct intel_engine_cs *engine) return intel_engine_has_preemption(engine); } +struct intel_context * +intel_engine_create_virtual(struct intel_engine_cs **siblings, + unsigned int count); + +static inline bool +intel_virtual_engine_has_heartbeat(const struct intel_engine_cs *engine) +{ + /* + * For non-GuC submission we expect the back-end to look at the + * heartbeat status of the actual physical engine that the work + * has been (or is being) scheduled on, so we should only reach + * here with GuC submission enabled. 
+ */ + GEM_BUG_ON(!intel_engine_uses_guc(engine)); + + return intel_guc_virtual_engine_has_heartbeat(engine); +} + static inline bool intel_engine_has_heartbeat(const struct intel_engine_cs *engine) { if (!IS_ACTIVE(CONFIG_DRM_I915_HEARTBEAT_INTERVAL)) return false; - return READ_ONCE(engine->props.heartbeat_interval_ms); + if (intel_engine_is_virtual(engine)) + return intel_virtual_engine_has_heartbeat(engine); + else + return READ_ONCE(engine->props.heartbeat_interval_ms); +} + +static inline struct intel_engine_cs * +intel_engine_get_sibling(struct intel_engine_cs *engine, unsigned int sibling) +{ + GEM_BUG_ON(!intel_engine_is_virtual(engine)); + return engine->cops->get_sibling(engine, sibling); +} + +static inline void +intel_engine_set_hung_context(struct intel_engine_cs *engine, + struct intel_context *ce) +{ + engine->hung_ce = ce; +} + +static inline void +intel_engine_clear_hung_context(struct intel_engine_cs *engine) +{ + intel_engine_set_hung_context(engine, NULL); +} + +static inline struct intel_context * +intel_engine_get_hung_context(struct intel_engine_cs *engine) +{ + return engine->hung_ce; } #endif /* _INTEL_RINGBUFFER_H_ */ diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c index d561573ed98c..51a0d860d551 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c +++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c @@ -739,7 +739,7 @@ static int engine_setup_common(struct intel_engine_cs *engine) err_cmd_parser: i915_sched_engine_put(engine->sched_engine); err_sched_engine: - intel_breadcrumbs_free(engine->breadcrumbs); + intel_breadcrumbs_put(engine->breadcrumbs); err_status: cleanup_status_page(engine); return err; @@ -948,7 +948,7 @@ void intel_engine_cleanup_common(struct intel_engine_cs *engine) GEM_BUG_ON(!list_empty(&engine->sched_engine->requests)); i915_sched_engine_put(engine->sched_engine); - intel_breadcrumbs_free(engine->breadcrumbs); + intel_breadcrumbs_put(engine->breadcrumbs); intel_engine_fini_retire(engine); intel_engine_cleanup_cmd_parser(engine); @@ -1265,6 +1265,30 @@ bool intel_engines_are_idle(struct intel_gt *gt) return true; } +bool intel_engine_irq_enable(struct intel_engine_cs *engine) +{ + if (!engine->irq_enable) + return false; + + /* Caller disables interrupts */ + spin_lock(&engine->gt->irq_lock); + engine->irq_enable(engine); + spin_unlock(&engine->gt->irq_lock); + + return true; +} + +void intel_engine_irq_disable(struct intel_engine_cs *engine) +{ + if (!engine->irq_disable) + return; + + /* Caller disables interrupts */ + spin_lock(&engine->gt->irq_lock); + engine->irq_disable(engine); + spin_unlock(&engine->gt->irq_lock); +} + void intel_engines_reset_default_submission(struct intel_gt *gt) { struct intel_engine_cs *engine; @@ -1601,6 +1625,97 @@ static void print_properties(struct intel_engine_cs *engine, read_ul(&engine->defaults, p->offset)); } +static void engine_dump_request(struct i915_request *rq, struct drm_printer *m, const char *msg) +{ + struct intel_timeline *tl = get_timeline(rq); + + i915_request_show(m, rq, msg, 0); + + drm_printf(m, "\t\tring->start: 0x%08x\n", + i915_ggtt_offset(rq->ring->vma)); + drm_printf(m, "\t\tring->head: 0x%08x\n", + rq->ring->head); + drm_printf(m, "\t\tring->tail: 0x%08x\n", + rq->ring->tail); + drm_printf(m, "\t\tring->emit: 0x%08x\n", + rq->ring->emit); + drm_printf(m, "\t\tring->space: 0x%08x\n", + rq->ring->space); + + if (tl) { + drm_printf(m, "\t\tring->hwsp: 0x%08x\n", + tl->hwsp_offset); + intel_timeline_put(tl); + } + + print_request_ring(m, 
rq); + + if (rq->context->lrc_reg_state) { + drm_printf(m, "Logical Ring Context:\n"); + hexdump(m, rq->context->lrc_reg_state, PAGE_SIZE); + } +} + +void intel_engine_dump_active_requests(struct list_head *requests, + struct i915_request *hung_rq, + struct drm_printer *m) +{ + struct i915_request *rq; + const char *msg; + enum i915_request_state state; + + list_for_each_entry(rq, requests, sched.link) { + if (rq == hung_rq) + continue; + + state = i915_test_request_state(rq); + if (state < I915_REQUEST_QUEUED) + continue; + + if (state == I915_REQUEST_ACTIVE) + msg = "\t\tactive on engine"; + else + msg = "\t\tactive in queue"; + + engine_dump_request(rq, m, msg); + } +} + +static void engine_dump_active_requests(struct intel_engine_cs *engine, struct drm_printer *m) +{ + struct i915_request *hung_rq = NULL; + struct intel_context *ce; + bool guc; + + /* + * No need for an engine->irq_seqno_barrier() before the seqno reads. + * The GPU is still running so requests are still executing and any + * hardware reads will be out of date by the time they are reported. + * But the intention here is just to report an instantaneous snapshot + * so that's fine. + */ + lockdep_assert_held(&engine->sched_engine->lock); + + drm_printf(m, "\tRequests:\n"); + + guc = intel_uc_uses_guc_submission(&engine->gt->uc); + if (guc) { + ce = intel_engine_get_hung_context(engine); + if (ce) + hung_rq = intel_context_find_active_request(ce); + } else + hung_rq = intel_engine_execlist_find_hung_request(engine); + + if (hung_rq) + engine_dump_request(hung_rq, m, "\t\thung"); + + if (guc) + intel_guc_dump_active_requests(engine, hung_rq, m); + else + intel_engine_dump_active_requests(&engine->sched_engine->requests, + hung_rq, m); +} + void intel_engine_dump(struct intel_engine_cs *engine, struct drm_printer *m, const char *header, ...) 
@@ -1645,39 +1760,9 @@ void intel_engine_dump(struct intel_engine_cs *engine, i915_reset_count(error)); print_properties(engine, m); - drm_printf(m, "\tRequests:\n"); - spin_lock_irqsave(&engine->sched_engine->lock, flags); - rq = intel_engine_find_active_request(engine); - if (rq) { - struct intel_timeline *tl = get_timeline(rq); + engine_dump_active_requests(engine, m); - i915_request_show(m, rq, "\t\tactive ", 0); - - drm_printf(m, "\t\tring->start: 0x%08x\n", - i915_ggtt_offset(rq->ring->vma)); - drm_printf(m, "\t\tring->head: 0x%08x\n", - rq->ring->head); - drm_printf(m, "\t\tring->tail: 0x%08x\n", - rq->ring->tail); - drm_printf(m, "\t\tring->emit: 0x%08x\n", - rq->ring->emit); - drm_printf(m, "\t\tring->space: 0x%08x\n", - rq->ring->space); - - if (tl) { - drm_printf(m, "\t\tring->hwsp: 0x%08x\n", - tl->hwsp_offset); - intel_timeline_put(tl); - } - - print_request_ring(m, rq); - - if (rq->context->lrc_reg_state) { - drm_printf(m, "Logical Ring Context:\n"); - hexdump(m, rq->context->lrc_reg_state, PAGE_SIZE); - } - } drm_printf(m, "\tOn hold?: %lu\n", list_count(&engine->sched_engine->hold)); spin_unlock_irqrestore(&engine->sched_engine->lock, flags); @@ -1737,18 +1822,32 @@ ktime_t intel_engine_get_busy_time(struct intel_engine_cs *engine, ktime_t *now) return total; } -static bool match_ring(struct i915_request *rq) +struct intel_context * +intel_engine_create_virtual(struct intel_engine_cs **siblings, + unsigned int count) { - u32 ring = ENGINE_READ(rq->engine, RING_START); + if (count == 0) + return ERR_PTR(-EINVAL); + + if (count == 1) + return intel_context_create(siblings[0]); - return ring == i915_ggtt_offset(rq->ring->vma); + GEM_BUG_ON(!siblings[0]->cops->create_virtual); + return siblings[0]->cops->create_virtual(siblings, count); } struct i915_request * -intel_engine_find_active_request(struct intel_engine_cs *engine) +intel_engine_execlist_find_hung_request(struct intel_engine_cs *engine) { struct i915_request *request, *active = NULL; + /* + * This search does not work in GuC submission mode. However, the GuC + * will report the hanging context directly to the driver itself. So + * the driver should never get here when in GuC mode. + */ + GEM_BUG_ON(intel_uc_uses_guc_submission(&engine->gt->uc)); + /* * We are called by the error capture, reset and to dump engine * state at random points in time. In particular, note that neither is @@ -1780,14 +1879,7 @@ intel_engine_find_active_request(struct intel_engine_cs *engine) list_for_each_entry(request, &engine->sched_engine->requests, sched.link) { - if (__i915_request_is_complete(request)) - continue; - - if (!__i915_request_has_started(request)) - continue; - - /* More than one preemptible request may match! 
*/ - if (!match_ring(request)) + if (i915_test_request_state(request) != I915_REQUEST_ACTIVE) continue; active = request; diff --git a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c index b6a305e6a974..f0768824de6f 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c +++ b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c @@ -70,12 +70,38 @@ static void show_heartbeat(const struct i915_request *rq, { struct drm_printer p = drm_debug_printer("heartbeat"); - intel_engine_dump(engine, &p, - "%s heartbeat {seqno:%llx:%lld, prio:%d} not ticking\n", - engine->name, - rq->fence.context, - rq->fence.seqno, - rq->sched.attr.priority); + if (!rq) { + intel_engine_dump(engine, &p, + "%s heartbeat not ticking\n", + engine->name); + } else { + intel_engine_dump(engine, &p, + "%s heartbeat {seqno:%llx:%lld, prio:%d} not ticking\n", + engine->name, + rq->fence.context, + rq->fence.seqno, + rq->sched.attr.priority); + } +} + +static void +reset_engine(struct intel_engine_cs *engine, struct i915_request *rq) +{ + if (IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM)) + show_heartbeat(rq, engine); + + if (intel_engine_uses_guc(engine)) + /* + * GuC itself is toast or GuC's hang detection + * is disabled. Either way, need to find the + * hang culprit manually. + */ + intel_guc_find_hung_context(engine); + + intel_gt_handle_error(engine->gt, engine->mask, + I915_ERROR_CAPTURE, + "stopped heartbeat on %s", + engine->name); } static void heartbeat(struct work_struct *wrk) @@ -102,6 +128,11 @@ static void heartbeat(struct work_struct *wrk) if (intel_gt_is_wedged(engine->gt)) goto out; + if (i915_sched_engine_disabled(engine->sched_engine)) { + reset_engine(engine, engine->heartbeat.systole); + goto out; + } + if (engine->heartbeat.systole) { long delay = READ_ONCE(engine->props.heartbeat_interval_ms); @@ -139,13 +170,7 @@ static void heartbeat(struct work_struct *wrk) engine->sched_engine->schedule(rq, &attr); local_bh_enable(); } else { - if (IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM)) - show_heartbeat(rq, engine); - - intel_gt_handle_error(engine->gt, engine->mask, - I915_ERROR_CAPTURE, - "stopped heartbeat on %s", - engine->name); + reset_engine(engine, rq); } rq->emitted_jiffies = jiffies; @@ -194,6 +219,26 @@ void intel_engine_park_heartbeat(struct intel_engine_cs *engine) i915_request_put(fetch_and_zero(&engine->heartbeat.systole)); } +void intel_gt_unpark_heartbeats(struct intel_gt *gt) +{ + struct intel_engine_cs *engine; + enum intel_engine_id id; + + for_each_engine(engine, gt, id) + if (intel_engine_pm_is_awake(engine)) + intel_engine_unpark_heartbeat(engine); + +} + +void intel_gt_park_heartbeats(struct intel_gt *gt) +{ + struct intel_engine_cs *engine; + enum intel_engine_id id; + + for_each_engine(engine, gt, id) + intel_engine_park_heartbeat(engine); +} + void intel_engine_init_heartbeat(struct intel_engine_cs *engine) { INIT_DELAYED_WORK(&engine->heartbeat.work, heartbeat); diff --git a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h index a488ea3e84a3..5da6d809a87a 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h +++ b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h @@ -7,6 +7,7 @@ #define INTEL_ENGINE_HEARTBEAT_H struct intel_engine_cs; +struct intel_gt; void intel_engine_init_heartbeat(struct intel_engine_cs *engine); @@ -16,6 +17,9 @@ int intel_engine_set_heartbeat(struct intel_engine_cs *engine, void intel_engine_park_heartbeat(struct intel_engine_cs *engine); void 
intel_engine_unpark_heartbeat(struct intel_engine_cs *engine); +void intel_gt_park_heartbeats(struct intel_gt *gt); +void intel_gt_unpark_heartbeats(struct intel_gt *gt); + int intel_engine_pulse(struct intel_engine_cs *engine); int intel_engine_flush_barriers(struct intel_engine_cs *engine); diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h index 1cb9c3b70b29..eec57e57403f 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine_types.h +++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h @@ -21,7 +21,6 @@ #include "i915_pmu.h" #include "i915_priolist_types.h" #include "i915_selftest.h" -#include "intel_breadcrumbs_types.h" #include "intel_sseu.h" #include "intel_timeline_types.h" #include "intel_uncore.h" @@ -63,6 +62,7 @@ struct i915_sched_engine; struct intel_gt; struct intel_ring; struct intel_uncore; +struct intel_breadcrumbs; typedef u8 intel_engine_mask_t; #define ALL_ENGINES ((intel_engine_mask_t)~0ul) @@ -304,6 +304,8 @@ struct intel_engine_cs { /* keep a request in reserve for a [pm] barrier under oom */ struct i915_request *request_pool; + struct intel_context *hung_ce; + struct llist_head barrier_tasks; struct intel_context *kernel_context; /* pinned */ @@ -388,6 +390,8 @@ struct intel_engine_cs { void (*park)(struct intel_engine_cs *engine); void (*unpark)(struct intel_engine_cs *engine); + void (*bump_serial)(struct intel_engine_cs *engine); + void (*set_default_submission)(struct intel_engine_cs *engine); const struct intel_context_ops *cops; @@ -418,6 +422,12 @@ struct intel_engine_cs { void (*release)(struct intel_engine_cs *engine); + /* + * Add / remove request from engine active tracking + */ + void (*add_active_request)(struct i915_request *rq); + void (*remove_active_request)(struct i915_request *rq); + struct intel_engine_execlists execlists; /* @@ -439,6 +449,7 @@ struct intel_engine_cs { #define I915_ENGINE_IS_VIRTUAL BIT(5) #define I915_ENGINE_HAS_RELATIVE_MMIO BIT(6) #define I915_ENGINE_REQUIRES_CMD_PARSER BIT(7) +#define I915_ENGINE_WANT_FORCED_PREEMPTION BIT(8) unsigned int flags; /* diff --git a/drivers/gpu/drm/i915/gt/intel_engine_user.c b/drivers/gpu/drm/i915/gt/intel_engine_user.c index 84142127ebd8..8f8bea08e734 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine_user.c +++ b/drivers/gpu/drm/i915/gt/intel_engine_user.c @@ -11,6 +11,7 @@ #include "intel_engine.h" #include "intel_engine_user.h" #include "intel_gt.h" +#include "uc/intel_guc_submission.h" struct intel_engine_cs * intel_engine_lookup_user(struct drm_i915_private *i915, u8 class, u8 instance) @@ -115,6 +116,9 @@ static void set_scheduler_caps(struct drm_i915_private *i915) disabled |= (I915_SCHEDULER_CAP_ENABLED | I915_SCHEDULER_CAP_PRIORITY); + if (intel_uc_uses_guc_submission(&i915->gt.uc)) + enabled |= I915_SCHEDULER_CAP_STATIC_PRIORITY_MAP; + for (i = 0; i < ARRAY_SIZE(map); i++) { if (engine->flags & BIT(map[i].engine)) enabled |= BIT(map[i].sched); diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c index 56e25090da67..2aa5cc100956 100644 --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c @@ -114,6 +114,7 @@ #include "gen8_engine_cs.h" #include "intel_breadcrumbs.h" #include "intel_context.h" +#include "intel_engine_heartbeat.h" #include "intel_engine_pm.h" #include "intel_engine_stats.h" #include "intel_execlists_submission.h" @@ -193,6 +194,9 @@ static struct virtual_engine *to_virtual_engine(struct 
intel_engine_cs *engine) return container_of(engine, struct virtual_engine, base); } +static struct intel_context * +execlists_create_virtual(struct intel_engine_cs **siblings, unsigned int count); + static struct i915_request * __active_request(const struct intel_timeline * const tl, struct i915_request *rq, @@ -2533,11 +2537,26 @@ static int execlists_context_alloc(struct intel_context *ce) return lrc_alloc(ce, ce->engine); } +static void execlists_context_cancel_request(struct intel_context *ce, + struct i915_request *rq) +{ + struct intel_engine_cs *engine = NULL; + + i915_request_active_engine(rq, &engine); + + if (engine && intel_engine_pulse(engine)) + intel_gt_handle_error(engine->gt, engine->mask, 0, + "request cancellation by %s", + current->comm); +} + static const struct intel_context_ops execlists_context_ops = { .flags = COPS_HAS_INFLIGHT, .alloc = execlists_context_alloc, + .cancel_request = execlists_context_cancel_request, + .pre_pin = execlists_context_pre_pin, .pin = execlists_context_pin, .unpin = lrc_unpin, @@ -2548,6 +2567,8 @@ static const struct intel_context_ops execlists_context_ops = { .reset = lrc_reset, .destroy = lrc_destroy, + + .create_virtual = execlists_create_virtual, }; static int emit_pdps(struct i915_request *rq) @@ -3101,6 +3122,42 @@ static void execlists_park(struct intel_engine_cs *engine) cancel_timer(&engine->execlists.preempt); } +static void add_to_engine(struct i915_request *rq) +{ + lockdep_assert_held(&rq->engine->sched_engine->lock); + list_move_tail(&rq->sched.link, &rq->engine->sched_engine->requests); +} + +static void remove_from_engine(struct i915_request *rq) +{ + struct intel_engine_cs *engine, *locked; + + /* + * Virtual engines complicate acquiring the engine timeline lock, + * as their rq->engine pointer is not stable until under that + * engine lock. The simple ploy we use is to take the lock then + * check that the rq still belongs to the newly locked engine. 
+ */ + locked = READ_ONCE(rq->engine); + spin_lock_irq(&locked->sched_engine->lock); + while (unlikely(locked != (engine = READ_ONCE(rq->engine)))) { + spin_unlock(&locked->sched_engine->lock); + spin_lock(&engine->sched_engine->lock); + locked = engine; + } + list_del_init(&rq->sched.link); + + clear_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags); + clear_bit(I915_FENCE_FLAG_HOLD, &rq->fence.flags); + + /* Prevent further __await_execution() registering a cb, then flush */ + set_bit(I915_FENCE_FLAG_ACTIVE, &rq->fence.flags); + + spin_unlock_irq(&locked->sched_engine->lock); + + i915_request_notify_execute_cb_imm(rq); +} + static bool can_preempt(struct intel_engine_cs *engine) { if (GRAPHICS_VER(engine->i915) > 8) @@ -3195,6 +3252,8 @@ logical_ring_default_vfuncs(struct intel_engine_cs *engine) engine->cops = &execlists_context_ops; engine->request_alloc = execlists_request_alloc; + engine->add_active_request = add_to_engine; + engine->remove_active_request = remove_from_engine; engine->reset.prepare = execlists_reset_prepare; engine->reset.rewind = execlists_reset_rewind; @@ -3396,7 +3455,7 @@ static void rcu_virtual_context_destroy(struct work_struct *wrk) intel_context_fini(&ve->context); if (ve->base.breadcrumbs) - intel_breadcrumbs_free(ve->base.breadcrumbs); + intel_breadcrumbs_put(ve->base.breadcrumbs); if (ve->base.sched_engine) i915_sched_engine_put(ve->base.sched_engine); intel_engine_free_request_pool(&ve->base); @@ -3493,11 +3552,24 @@ static void virtual_context_exit(struct intel_context *ce) intel_engine_pm_put(ve->siblings[n]); } +static struct intel_engine_cs * +virtual_get_sibling(struct intel_engine_cs *engine, unsigned int sibling) +{ + struct virtual_engine *ve = to_virtual_engine(engine); + + if (sibling >= ve->num_siblings) + return NULL; + + return ve->siblings[sibling]; +} + static const struct intel_context_ops virtual_context_ops = { .flags = COPS_HAS_INFLIGHT, .alloc = virtual_context_alloc, + .cancel_request = execlists_context_cancel_request, + .pre_pin = virtual_context_pre_pin, .pin = virtual_context_pin, .unpin = lrc_unpin, @@ -3507,6 +3579,8 @@ static const struct intel_context_ops virtual_context_ops = { .exit = virtual_context_exit, .destroy = virtual_context_destroy, + + .get_sibling = virtual_get_sibling, }; static intel_engine_mask_t virtual_submission_mask(struct virtual_engine *ve) @@ -3655,20 +3729,13 @@ static void virtual_submit_request(struct i915_request *rq) spin_unlock_irqrestore(&ve->base.sched_engine->lock, flags); } -struct intel_context * -intel_execlists_create_virtual(struct intel_engine_cs **siblings, - unsigned int count) +static struct intel_context * +execlists_create_virtual(struct intel_engine_cs **siblings, unsigned int count) { struct virtual_engine *ve; unsigned int n; int err; - if (count == 0) - return ERR_PTR(-EINVAL); - - if (count == 1) - return intel_context_create(siblings[0]); - ve = kzalloc(struct_size(ve, siblings, count), GFP_KERNEL); if (!ve) return ERR_PTR(-ENOMEM); @@ -3780,6 +3847,8 @@ intel_execlists_create_virtual(struct intel_engine_cs **siblings, "v%dx%d", ve->base.class, count); ve->base.context_size = sibling->context_size; + ve->base.add_active_request = sibling->add_active_request; + ve->base.remove_active_request = sibling->remove_active_request; ve->base.emit_bb_start = sibling->emit_bb_start; ve->base.emit_flush = sibling->emit_flush; ve->base.emit_init_breadcrumb = sibling->emit_init_breadcrumb; diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.h 
b/drivers/gpu/drm/i915/gt/intel_execlists_submission.h index ad4f3e1a0fde..a1aa92c983a5 100644 --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.h +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.h @@ -32,10 +32,6 @@ void intel_execlists_show_requests(struct intel_engine_cs *engine, int indent), unsigned int max); -struct intel_context * -intel_execlists_create_virtual(struct intel_engine_cs **siblings, - unsigned int count); - bool intel_engine_in_execlists_submission_mode(const struct intel_engine_cs *engine); diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c b/drivers/gpu/drm/i915/gt/intel_gt.c index e714e21c0a4d..ceeb517ba259 100644 --- a/drivers/gpu/drm/i915/gt/intel_gt.c +++ b/drivers/gpu/drm/i915/gt/intel_gt.c @@ -585,6 +585,25 @@ static void __intel_gt_disable(struct intel_gt *gt) GEM_BUG_ON(intel_gt_pm_is_awake(gt)); } +int intel_gt_wait_for_idle(struct intel_gt *gt, long timeout) +{ + long remaining_timeout; + + /* If the device is asleep, we have no requests outstanding */ + if (!intel_gt_pm_is_awake(gt)) + return 0; + + while ((timeout = intel_gt_retire_requests_timeout(gt, timeout, + &remaining_timeout)) > 0) { + cond_resched(); + if (signal_pending(current)) + return -EINTR; + } + + return timeout ? timeout : intel_uc_wait_for_idle(>->uc, + remaining_timeout); +} + int intel_gt_init(struct intel_gt *gt) { int err; @@ -635,6 +654,8 @@ int intel_gt_init(struct intel_gt *gt) if (err) goto err_gt; + intel_uc_init_late(>->uc); + err = i915_inject_probe_error(gt->i915, -EIO); if (err) goto err_gt; diff --git a/drivers/gpu/drm/i915/gt/intel_gt.h b/drivers/gpu/drm/i915/gt/intel_gt.h index e7aabe0cc5bf..74e771871a9b 100644 --- a/drivers/gpu/drm/i915/gt/intel_gt.h +++ b/drivers/gpu/drm/i915/gt/intel_gt.h @@ -48,6 +48,8 @@ void intel_gt_driver_release(struct intel_gt *gt); void intel_gt_driver_late_release(struct intel_gt *gt); +int intel_gt_wait_for_idle(struct intel_gt *gt, long timeout); + void intel_gt_check_and_clear_faults(struct intel_gt *gt); void intel_gt_clear_error_registers(struct intel_gt *gt, intel_engine_mask_t engine_mask); diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm.c b/drivers/gpu/drm/i915/gt/intel_gt_pm.c index aef3084e8b16..463a6ae605a0 100644 --- a/drivers/gpu/drm/i915/gt/intel_gt_pm.c +++ b/drivers/gpu/drm/i915/gt/intel_gt_pm.c @@ -174,8 +174,6 @@ static void gt_sanitize(struct intel_gt *gt, bool force) if (intel_gt_is_wedged(gt)) intel_gt_unset_wedged(gt); - intel_uc_sanitize(>->uc); - for_each_engine(engine, gt, id) if (engine->reset.prepare) engine->reset.prepare(engine); @@ -191,6 +189,8 @@ static void gt_sanitize(struct intel_gt *gt, bool force) __intel_engine_reset(engine, false); } + intel_uc_reset(>->uc, false); + for_each_engine(engine, gt, id) if (engine->reset.finish) engine->reset.finish(engine); @@ -243,6 +243,8 @@ int intel_gt_resume(struct intel_gt *gt) goto err_wedged; } + intel_uc_reset_finish(>->uc); + intel_rps_enable(>->rps); intel_llc_enable(>->llc); diff --git a/drivers/gpu/drm/i915/gt/intel_gt_requests.c b/drivers/gpu/drm/i915/gt/intel_gt_requests.c index 647eca9d867a..edb881d75630 100644 --- a/drivers/gpu/drm/i915/gt/intel_gt_requests.c +++ b/drivers/gpu/drm/i915/gt/intel_gt_requests.c @@ -130,7 +130,8 @@ void intel_engine_fini_retire(struct intel_engine_cs *engine) GEM_BUG_ON(engine->retire); } -long intel_gt_retire_requests_timeout(struct intel_gt *gt, long timeout) +long intel_gt_retire_requests_timeout(struct intel_gt *gt, long timeout, + long *remaining_timeout) { struct intel_gt_timelines *timelines = >->timelines; 
struct intel_timeline *tl, *tn; @@ -195,22 +196,10 @@ out_active: spin_lock(&timelines->lock); if (flush_submission(gt, timeout)) /* Wait, there's more! */ active_count++; - return active_count ? timeout : 0; -} - -int intel_gt_wait_for_idle(struct intel_gt *gt, long timeout) -{ - /* If the device is asleep, we have no requests outstanding */ - if (!intel_gt_pm_is_awake(gt)) - return 0; - - while ((timeout = intel_gt_retire_requests_timeout(gt, timeout)) > 0) { - cond_resched(); - if (signal_pending(current)) - return -EINTR; - } + if (remaining_timeout) + *remaining_timeout = timeout; - return timeout; + return active_count ? timeout : 0; } static void retire_work_handler(struct work_struct *work) diff --git a/drivers/gpu/drm/i915/gt/intel_gt_requests.h b/drivers/gpu/drm/i915/gt/intel_gt_requests.h index fcc30a6e4fe9..51dbe0e3294e 100644 --- a/drivers/gpu/drm/i915/gt/intel_gt_requests.h +++ b/drivers/gpu/drm/i915/gt/intel_gt_requests.h @@ -6,14 +6,17 @@ #ifndef INTEL_GT_REQUESTS_H #define INTEL_GT_REQUESTS_H +#include + struct intel_engine_cs; struct intel_gt; struct intel_timeline; -long intel_gt_retire_requests_timeout(struct intel_gt *gt, long timeout); +long intel_gt_retire_requests_timeout(struct intel_gt *gt, long timeout, + long *remaining_timeout); static inline void intel_gt_retire_requests(struct intel_gt *gt) { - intel_gt_retire_requests_timeout(gt, 0); + intel_gt_retire_requests_timeout(gt, 0, NULL); } void intel_engine_init_retire(struct intel_engine_cs *engine); @@ -21,8 +24,6 @@ void intel_engine_add_retire(struct intel_engine_cs *engine, struct intel_timeline *tl); void intel_engine_fini_retire(struct intel_engine_cs *engine); -int intel_gt_wait_for_idle(struct intel_gt *gt, long timeout); - void intel_gt_init_requests(struct intel_gt *gt); void intel_gt_park_requests(struct intel_gt *gt); void intel_gt_unpark_requests(struct intel_gt *gt); diff --git a/drivers/gpu/drm/i915/gt/intel_lrc_reg.h b/drivers/gpu/drm/i915/gt/intel_lrc_reg.h index 41e5350a7a05..49d4857ad9b7 100644 --- a/drivers/gpu/drm/i915/gt/intel_lrc_reg.h +++ b/drivers/gpu/drm/i915/gt/intel_lrc_reg.h @@ -87,7 +87,6 @@ #define GEN11_CSB_WRITE_PTR_MASK (GEN11_CSB_PTR_MASK << 0) #define MAX_CONTEXT_HW_ID (1 << 21) /* exclusive */ -#define MAX_GUC_CONTEXT_HW_ID (1 << 20) /* exclusive */ #define GEN11_MAX_CONTEXT_HW_ID (1 << 11) /* exclusive */ /* in Gen12 ID 0x7FF is reserved to indicate idle */ #define GEN12_MAX_CONTEXT_HW_ID (GEN11_MAX_CONTEXT_HW_ID - 1) diff --git a/drivers/gpu/drm/i915/gt/intel_reset.c b/drivers/gpu/drm/i915/gt/intel_reset.c index 72251638d4ea..3ed694cab5af 100644 --- a/drivers/gpu/drm/i915/gt/intel_reset.c +++ b/drivers/gpu/drm/i915/gt/intel_reset.c @@ -22,7 +22,6 @@ #include "intel_reset.h" #include "uc/intel_guc.h" -#include "uc/intel_guc_submission.h" #define RESET_MAX_RETRIES 3 @@ -39,21 +38,6 @@ static void rmw_clear_fw(struct intel_uncore *uncore, i915_reg_t reg, u32 clr) intel_uncore_rmw_fw(uncore, reg, clr, 0); } -static void skip_context(struct i915_request *rq) -{ - struct intel_context *hung_ctx = rq->context; - - list_for_each_entry_from_rcu(rq, &hung_ctx->timeline->requests, link) { - if (!i915_request_is_active(rq)) - return; - - if (rq->context == hung_ctx) { - i915_request_set_error_once(rq, -EIO); - __i915_request_skip(rq); - } - } -} - static void client_mark_guilty(struct i915_gem_context *ctx, bool banned) { struct drm_i915_file_private *file_priv = ctx->file_priv; @@ -88,10 +72,8 @@ static bool mark_guilty(struct i915_request *rq) bool banned; int i; - if 
(intel_context_is_closed(rq->context)) { - intel_context_set_banned(rq->context); + if (intel_context_is_closed(rq->context)) return true; - } rcu_read_lock(); ctx = rcu_dereference(rq->context->gem_context); @@ -123,11 +105,9 @@ static bool mark_guilty(struct i915_request *rq) banned = !i915_gem_context_is_recoverable(ctx); if (time_before(jiffies, prev_hang + CONTEXT_FAST_HANG_JIFFIES)) banned = true; - if (banned) { + if (banned) drm_dbg(&ctx->i915->drm, "context %s: guilty %d, banned\n", ctx->name, atomic_read(&ctx->guilty_count)); - intel_context_set_banned(rq->context); - } client_mark_guilty(ctx, banned); @@ -149,6 +129,8 @@ static void mark_innocent(struct i915_request *rq) void __i915_request_reset(struct i915_request *rq, bool guilty) { + bool banned = false; + RQ_TRACE(rq, "guilty? %s\n", yesno(guilty)); GEM_BUG_ON(__i915_request_is_complete(rq)); @@ -156,13 +138,15 @@ void __i915_request_reset(struct i915_request *rq, bool guilty) if (guilty) { i915_request_set_error_once(rq, -EIO); __i915_request_skip(rq); - if (mark_guilty(rq)) - skip_context(rq); + banned = mark_guilty(rq); } else { i915_request_set_error_once(rq, -EAGAIN); mark_innocent(rq); } rcu_read_unlock(); + + if (banned) + intel_context_ban(rq->context, rq); } static bool i915_in_reset(struct pci_dev *pdev) @@ -826,6 +810,8 @@ static int gt_reset(struct intel_gt *gt, intel_engine_mask_t stalled_mask) __intel_engine_reset(engine, stalled_mask & engine->mask); local_bh_enable(); + intel_uc_reset(>->uc, true); + intel_ggtt_restore_fences(gt->ggtt); return err; @@ -850,6 +836,8 @@ static void reset_finish(struct intel_gt *gt, intel_engine_mask_t awake) if (awake & engine->mask) intel_engine_pm_put(engine); } + + intel_uc_reset_finish(>->uc); } static void nop_submit_request(struct i915_request *request) @@ -903,6 +891,7 @@ static void __intel_gt_set_wedged(struct intel_gt *gt) for_each_engine(engine, gt, id) if (engine->reset.cancel) engine->reset.cancel(engine); + intel_uc_cancel_requests(>->uc); local_bh_enable(); reset_finish(gt, awake); @@ -1191,6 +1180,9 @@ int __intel_engine_reset_bh(struct intel_engine_cs *engine, const char *msg) ENGINE_TRACE(engine, "flags=%lx\n", gt->reset.flags); GEM_BUG_ON(!test_bit(I915_RESET_ENGINE + engine->id, >->reset.flags)); + if (intel_engine_uses_guc(engine)) + return -ENODEV; + if (!intel_engine_pm_get_if_awake(engine)) return 0; @@ -1201,13 +1193,10 @@ int __intel_engine_reset_bh(struct intel_engine_cs *engine, const char *msg) "Resetting %s for %s\n", engine->name, msg); atomic_inc(&engine->i915->gpu_error.reset_engine_count[engine->uabi_class]); - if (intel_engine_uses_guc(engine)) - ret = intel_guc_reset_engine(&engine->gt->uc.guc, engine); - else - ret = intel_gt_reset_engine(engine); + ret = intel_gt_reset_engine(engine); if (ret) { /* If we fail here, we expect to fallback to a global reset */ - ENGINE_TRACE(engine, "Failed to reset, err: %d\n", ret); + ENGINE_TRACE(engine, "Failed to reset %s, err: %d\n", engine->name, ret); goto out; } @@ -1341,7 +1330,8 @@ void intel_gt_handle_error(struct intel_gt *gt, * Try engine reset when available. We fall back to full reset if * single reset fails. 
*/ - if (intel_has_reset_engine(gt) && !intel_gt_is_wedged(gt)) { + if (!intel_uc_uses_guc_submission(>->uc) && + intel_has_reset_engine(gt) && !intel_gt_is_wedged(gt)) { local_bh_disable(); for_each_engine_masked(engine, gt, engine_mask, tmp) { BUILD_BUG_ON(I915_RESET_MODESET >= I915_RESET_ENGINE); diff --git a/drivers/gpu/drm/i915/gt/intel_ring_submission.c b/drivers/gpu/drm/i915/gt/intel_ring_submission.c index 5c4d204d07cc..05bb9f449df1 100644 --- a/drivers/gpu/drm/i915/gt/intel_ring_submission.c +++ b/drivers/gpu/drm/i915/gt/intel_ring_submission.c @@ -586,9 +586,29 @@ static void ring_context_reset(struct intel_context *ce) clear_bit(CONTEXT_VALID_BIT, &ce->flags); } +static void ring_context_ban(struct intel_context *ce, + struct i915_request *rq) +{ + struct intel_engine_cs *engine; + + if (!rq || !i915_request_is_active(rq)) + return; + + engine = rq->engine; + lockdep_assert_held(&engine->sched_engine->lock); + list_for_each_entry_continue(rq, &engine->sched_engine->requests, + sched.link) + if (rq->context == ce) { + i915_request_set_error_once(rq, -EIO); + __i915_request_skip(rq); + } +} + static const struct intel_context_ops ring_context_ops = { .alloc = ring_context_alloc, + .ban = ring_context_ban, + .pre_pin = ring_context_pre_pin, .pin = ring_context_pin, .unpin = ring_context_unpin, @@ -1047,6 +1067,25 @@ static void setup_irq(struct intel_engine_cs *engine) } } +static void add_to_engine(struct i915_request *rq) +{ + lockdep_assert_held(&rq->engine->sched_engine->lock); + list_move_tail(&rq->sched.link, &rq->engine->sched_engine->requests); +} + +static void remove_from_engine(struct i915_request *rq) +{ + spin_lock_irq(&rq->engine->sched_engine->lock); + list_del_init(&rq->sched.link); + + /* Prevent further __await_execution() registering a cb, then flush */ + set_bit(I915_FENCE_FLAG_ACTIVE, &rq->fence.flags); + + spin_unlock_irq(&rq->engine->sched_engine->lock); + + i915_request_notify_execute_cb_imm(rq); +} + static void setup_common(struct intel_engine_cs *engine) { struct drm_i915_private *i915 = engine->i915; @@ -1064,6 +1103,9 @@ static void setup_common(struct intel_engine_cs *engine) engine->reset.cancel = reset_cancel; engine->reset.finish = reset_finish; + engine->add_active_request = add_to_engine; + engine->remove_active_request = remove_from_engine; + engine->cops = &ring_context_ops; engine->request_alloc = ring_request_alloc; diff --git a/drivers/gpu/drm/i915/gt/intel_rps.c b/drivers/gpu/drm/i915/gt/intel_rps.c index 06e9a8ed4e03..0c8e7f2b06f0 100644 --- a/drivers/gpu/drm/i915/gt/intel_rps.c +++ b/drivers/gpu/drm/i915/gt/intel_rps.c @@ -1877,6 +1877,10 @@ void intel_rps_init(struct intel_rps *rps) if (GRAPHICS_VER(i915) >= 8 && GRAPHICS_VER(i915) < 11) rps->pm_intrmsk_mbz |= GEN8_PMINTR_DISABLE_REDIRECT_TO_GUC; + + /* GuC needs ARAT expired interrupt unmasked */ + if (intel_uc_uses_guc_submission(&rps_to_gt(rps)->uc)) + rps->pm_intrmsk_mbz |= ARAT_EXPIRED_INTRMSK; } void intel_rps_sanitize(struct intel_rps *rps) diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c index 7731db33c46a..2fd32dc0b521 100644 --- a/drivers/gpu/drm/i915/gt/intel_workarounds.c +++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c @@ -150,13 +150,14 @@ static void _wa_add(struct i915_wa_list *wal, const struct i915_wa *wa) } static void wa_add(struct i915_wa_list *wal, i915_reg_t reg, - u32 clear, u32 set, u32 read_mask) + u32 clear, u32 set, u32 read_mask, bool masked_reg) { struct i915_wa wa = { .reg = reg, .clr = clear, .set = 
set, .read = read_mask, + .masked_reg = masked_reg, }; _wa_add(wal, &wa); @@ -165,7 +166,7 @@ static void wa_add(struct i915_wa_list *wal, i915_reg_t reg, static void wa_write_clr_set(struct i915_wa_list *wal, i915_reg_t reg, u32 clear, u32 set) { - wa_add(wal, reg, clear, set, clear); + wa_add(wal, reg, clear, set, clear, false); } static void @@ -200,20 +201,20 @@ wa_write_clr(struct i915_wa_list *wal, i915_reg_t reg, u32 clr) static void wa_masked_en(struct i915_wa_list *wal, i915_reg_t reg, u32 val) { - wa_add(wal, reg, 0, _MASKED_BIT_ENABLE(val), val); + wa_add(wal, reg, 0, _MASKED_BIT_ENABLE(val), val, true); } static void wa_masked_dis(struct i915_wa_list *wal, i915_reg_t reg, u32 val) { - wa_add(wal, reg, 0, _MASKED_BIT_DISABLE(val), val); + wa_add(wal, reg, 0, _MASKED_BIT_DISABLE(val), val, true); } static void wa_masked_field_set(struct i915_wa_list *wal, i915_reg_t reg, u32 mask, u32 val) { - wa_add(wal, reg, 0, _MASKED_FIELD(mask, val), mask); + wa_add(wal, reg, 0, _MASKED_FIELD(mask, val), mask, true); } static void gen6_ctx_workarounds_init(struct intel_engine_cs *engine, @@ -533,10 +534,10 @@ static void icl_ctx_workarounds_init(struct intel_engine_cs *engine, wa_masked_en(wal, ICL_HDC_MODE, HDC_FORCE_NON_COHERENT); /* WaEnableFloatBlendOptimization:icl */ - wa_write_clr_set(wal, - GEN10_CACHE_MODE_SS, - 0, /* write-only, so skip validation */ - _MASKED_BIT_ENABLE(FLOAT_BLEND_OPTIMIZATION_ENABLE)); + wa_add(wal, GEN10_CACHE_MODE_SS, 0, + _MASKED_BIT_ENABLE(FLOAT_BLEND_OPTIMIZATION_ENABLE), + 0 /* write-only, so skip validation */, + true); /* WaDisableGPGPUMidThreadPreemption:icl */ wa_masked_field_set(wal, GEN8_CS_CHICKEN1, @@ -581,7 +582,7 @@ static void gen12_ctx_gt_tuning_init(struct intel_engine_cs *engine, FF_MODE2, FF_MODE2_TDS_TIMER_MASK, FF_MODE2_TDS_TIMER_128, - 0); + 0, false); } static void gen12_ctx_workarounds_init(struct intel_engine_cs *engine, @@ -619,7 +620,7 @@ static void gen12_ctx_workarounds_init(struct intel_engine_cs *engine, FF_MODE2, FF_MODE2_GS_TIMER_MASK, FF_MODE2_GS_TIMER_224, - 0); + 0, false); /* * Wa_14012131227:dg1 @@ -795,7 +796,7 @@ hsw_gt_workarounds_init(struct drm_i915_private *i915, struct i915_wa_list *wal) wa_add(wal, HSW_ROW_CHICKEN3, 0, _MASKED_BIT_ENABLE(HSW_ROW_CHICKEN3_L3_GLOBAL_ATOMICS_DISABLE), - 0 /* XXX does this reg exist? */); + 0 /* XXX does this reg exist? */, true); /* WaVSRefCountFullforceMissDisable:hsw */ wa_write_clr(wal, GEN7_FF_THREAD_MODE, GEN7_FF_VS_REF_CNT_FFME); @@ -1843,10 +1844,10 @@ rcs_engine_wa_init(struct intel_engine_cs *engine, struct i915_wa_list *wal) * disable bit, which we don't touch here, but it's good * to keep in mind (see 3DSTATE_PS and 3DSTATE_WM). */ - wa_add(wal, GEN7_GT_MODE, 0, - _MASKED_FIELD(GEN6_WIZ_HASHING_MASK, - GEN6_WIZ_HASHING_16x4), - GEN6_WIZ_HASHING_16x4); + wa_masked_field_set(wal, + GEN7_GT_MODE, + GEN6_WIZ_HASHING_MASK, + GEN6_WIZ_HASHING_16x4); } if (IS_GRAPHICS_VER(i915, 6, 7)) @@ -1896,10 +1897,10 @@ rcs_engine_wa_init(struct intel_engine_cs *engine, struct i915_wa_list *wal) * disable bit, which we don't touch here, but it's good * to keep in mind (see 3DSTATE_PS and 3DSTATE_WM). 
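The masked_reg flag threaded through wa_add() above records whether a workaround targets a "masked" register, i.e. one whose upper 16 bits act as a per-bit write enable; that is exactly the encoding the existing _MASKED_BIT_ENABLE()/_MASKED_FIELD() helpers produce, and the GuC register save/restore list built later in this patch needs to know it (GUC_REGSET_MASKED). Simplified forms of that encoding, for orientation only (the driver's real macros add build-time checks):

/* Assumed simplified equivalents of the _MASKED_* helpers. */
#define EXAMPLE_MASKED_FIELD(mask, value)       (((mask) << 16) | (value))
#define EXAMPLE_MASKED_BIT_ENABLE(a)            EXAMPLE_MASKED_FIELD((a), (a))
#define EXAMPLE_MASKED_BIT_DISABLE(a)           EXAMPLE_MASKED_FIELD((a), 0)

/*
 * e.g. wa_masked_en(wal, reg, BIT(3)) writes 0x00080008: bit 19 unlocks
 * the write, bit 3 carries the new value; unset mask bits are untouched.
 */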
*/ - wa_add(wal, - GEN6_GT_MODE, 0, - _MASKED_FIELD(GEN6_WIZ_HASHING_MASK, GEN6_WIZ_HASHING_16x4), - GEN6_WIZ_HASHING_16x4); + wa_masked_field_set(wal, + GEN6_GT_MODE, + GEN6_WIZ_HASHING_MASK, + GEN6_WIZ_HASHING_16x4); /* WaDisable_RenderCache_OperationalFlush:snb */ wa_masked_dis(wal, CACHE_MODE_0, RC_OP_FLUSH_ENABLE); @@ -1920,7 +1921,7 @@ rcs_engine_wa_init(struct intel_engine_cs *engine, struct i915_wa_list *wal) wa_add(wal, MI_MODE, 0, _MASKED_BIT_ENABLE(VS_TIMER_DISPATCH), /* XXX bit doesn't stick on Broadwater */ - IS_I965G(i915) ? 0 : VS_TIMER_DISPATCH); + IS_I965G(i915) ? 0 : VS_TIMER_DISPATCH, true); if (GRAPHICS_VER(i915) == 4) /* @@ -1935,7 +1936,8 @@ rcs_engine_wa_init(struct intel_engine_cs *engine, struct i915_wa_list *wal) */ wa_add(wal, ECOSKPD, 0, _MASKED_BIT_ENABLE(ECO_CONSTANT_BUFFER_SR_DISABLE), - 0 /* XXX bit doesn't stick on Broadwater */); + 0 /* XXX bit doesn't stick on Broadwater */, + true); } static void diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds_types.h b/drivers/gpu/drm/i915/gt/intel_workarounds_types.h index c214111ea367..1e873681795d 100644 --- a/drivers/gpu/drm/i915/gt/intel_workarounds_types.h +++ b/drivers/gpu/drm/i915/gt/intel_workarounds_types.h @@ -15,6 +15,7 @@ struct i915_wa { u32 clr; u32 set; u32 read; + bool masked_reg; }; struct i915_wa_list { diff --git a/drivers/gpu/drm/i915/gt/mock_engine.c b/drivers/gpu/drm/i915/gt/mock_engine.c index 68970398e4ef..2c1af030310c 100644 --- a/drivers/gpu/drm/i915/gt/mock_engine.c +++ b/drivers/gpu/drm/i915/gt/mock_engine.c @@ -235,6 +235,34 @@ static void mock_submit_request(struct i915_request *request) spin_unlock_irqrestore(&engine->hw_lock, flags); } +static void mock_add_to_engine(struct i915_request *rq) +{ + lockdep_assert_held(&rq->engine->sched_engine->lock); + list_move_tail(&rq->sched.link, &rq->engine->sched_engine->requests); +} + +static void mock_remove_from_engine(struct i915_request *rq) +{ + struct intel_engine_cs *engine, *locked; + + /* + * Virtual engines complicate acquiring the engine timeline lock, + * as their rq->engine pointer is not stable until under that + * engine lock. The simple ploy we use is to take the lock then + * check that the rq still belongs to the newly locked engine. 
+ */ + + locked = READ_ONCE(rq->engine); + spin_lock_irq(&locked->sched_engine->lock); + while (unlikely(locked != (engine = READ_ONCE(rq->engine)))) { + spin_unlock(&locked->sched_engine->lock); + spin_lock(&engine->sched_engine->lock); + locked = engine; + } + list_del_init(&rq->sched.link); + spin_unlock_irq(&locked->sched_engine->lock); +} + static void mock_reset_prepare(struct intel_engine_cs *engine) { } @@ -284,7 +312,7 @@ static void mock_engine_release(struct intel_engine_cs *engine) GEM_BUG_ON(timer_pending(&mock->hw_delay)); i915_sched_engine_put(engine->sched_engine); - intel_breadcrumbs_free(engine->breadcrumbs); + intel_breadcrumbs_put(engine->breadcrumbs); intel_context_unpin(engine->kernel_context); intel_context_put(engine->kernel_context); @@ -321,6 +349,8 @@ struct intel_engine_cs *mock_engine(struct drm_i915_private *i915, engine->base.emit_flush = mock_emit_flush; engine->base.emit_fini_breadcrumb = mock_emit_breadcrumb; engine->base.submit_request = mock_submit_request; + engine->base.add_active_request = mock_add_to_engine; + engine->base.remove_active_request = mock_remove_from_engine; engine->base.reset.prepare = mock_reset_prepare; engine->base.reset.rewind = mock_reset_rewind; @@ -370,7 +400,7 @@ int mock_engine_init(struct intel_engine_cs *engine) return 0; err_breadcrumbs: - intel_breadcrumbs_free(engine->breadcrumbs); + intel_breadcrumbs_put(engine->breadcrumbs); err_schedule: i915_sched_engine_put(engine->sched_engine); return -ENOMEM; diff --git a/drivers/gpu/drm/i915/gt/selftest_context.c b/drivers/gpu/drm/i915/gt/selftest_context.c index 26685b927169..fa7b99a671dd 100644 --- a/drivers/gpu/drm/i915/gt/selftest_context.c +++ b/drivers/gpu/drm/i915/gt/selftest_context.c @@ -209,7 +209,13 @@ static int __live_active_context(struct intel_engine_cs *engine) * This test makes sure that the context is kept alive until a * subsequent idle-barrier (emitted when the engine wakeref hits 0 * with no more outstanding requests). + * + * In GuC submission mode we don't use idle barriers and we instead + * get a message from the GuC to signal that it is safe to unpin the + * context from memory. */ + if (intel_engine_uses_guc(engine)) + return 0; if (intel_engine_pm_is_awake(engine)) { pr_err("%s is awake before starting %s!\n", @@ -357,7 +363,11 @@ static int __live_remote_context(struct intel_engine_cs *engine) * on the context image remotely (intel_context_prepare_remote_request), * which inserts foreign fences into intel_context.active, does not * clobber the idle-barrier. + * + * In GuC submission mode we don't use idle barriers. */ + if (intel_engine_uses_guc(engine)) + return 0; if (intel_engine_pm_is_awake(engine)) { pr_err("%s is awake before starting %s!\n", diff --git a/drivers/gpu/drm/i915/gt/selftest_engine_heartbeat.c b/drivers/gpu/drm/i915/gt/selftest_engine_heartbeat.c index 4896e4ccad50..317eebf086c3 100644 --- a/drivers/gpu/drm/i915/gt/selftest_engine_heartbeat.c +++ b/drivers/gpu/drm/i915/gt/selftest_engine_heartbeat.c @@ -405,3 +405,25 @@ void st_engine_heartbeat_enable(struct intel_engine_cs *engine) engine->props.heartbeat_interval_ms = engine->defaults.heartbeat_interval_ms; } + +void st_engine_heartbeat_disable_no_pm(struct intel_engine_cs *engine) +{ + engine->props.heartbeat_interval_ms = 0; + + /* + * Park the heartbeat but without holding the PM lock as that + * makes the engines appear not-idle. Note that if/when unpark + * is called due to the PM lock being acquired later the + * heartbeat still won't be enabled because of the above = 0. 
+ */ + if (intel_engine_pm_get_if_awake(engine)) { + intel_engine_park_heartbeat(engine); + intel_engine_pm_put(engine); + } +} + +void st_engine_heartbeat_enable_no_pm(struct intel_engine_cs *engine) +{ + engine->props.heartbeat_interval_ms = + engine->defaults.heartbeat_interval_ms; +} diff --git a/drivers/gpu/drm/i915/gt/selftest_engine_heartbeat.h b/drivers/gpu/drm/i915/gt/selftest_engine_heartbeat.h index cd27113d5400..81da2cd8e406 100644 --- a/drivers/gpu/drm/i915/gt/selftest_engine_heartbeat.h +++ b/drivers/gpu/drm/i915/gt/selftest_engine_heartbeat.h @@ -9,6 +9,8 @@ struct intel_engine_cs; void st_engine_heartbeat_disable(struct intel_engine_cs *engine); +void st_engine_heartbeat_disable_no_pm(struct intel_engine_cs *engine); void st_engine_heartbeat_enable(struct intel_engine_cs *engine); +void st_engine_heartbeat_enable_no_pm(struct intel_engine_cs *engine); #endif /* SELFTEST_ENGINE_HEARTBEAT_H */ diff --git a/drivers/gpu/drm/i915/gt/selftest_execlists.c b/drivers/gpu/drm/i915/gt/selftest_execlists.c index 73ddc6e14730..59cf8afc6d6f 100644 --- a/drivers/gpu/drm/i915/gt/selftest_execlists.c +++ b/drivers/gpu/drm/i915/gt/selftest_execlists.c @@ -3727,7 +3727,7 @@ static int nop_virtual_engine(struct intel_gt *gt, GEM_BUG_ON(!nctx || nctx > ARRAY_SIZE(ve)); for (n = 0; n < nctx; n++) { - ve[n] = intel_execlists_create_virtual(siblings, nsibling); + ve[n] = intel_engine_create_virtual(siblings, nsibling); if (IS_ERR(ve[n])) { err = PTR_ERR(ve[n]); nctx = n; @@ -3923,7 +3923,7 @@ static int mask_virtual_engine(struct intel_gt *gt, * restrict it to our desired engine within the virtual engine. */ - ve = intel_execlists_create_virtual(siblings, nsibling); + ve = intel_engine_create_virtual(siblings, nsibling); if (IS_ERR(ve)) { err = PTR_ERR(ve); goto out_close; @@ -4054,7 +4054,7 @@ static int slicein_virtual_engine(struct intel_gt *gt, i915_request_add(rq); } - ce = intel_execlists_create_virtual(siblings, nsibling); + ce = intel_engine_create_virtual(siblings, nsibling); if (IS_ERR(ce)) { err = PTR_ERR(ce); goto out; @@ -4106,7 +4106,7 @@ static int sliceout_virtual_engine(struct intel_gt *gt, /* XXX We do not handle oversubscription and fairness with normal rq */ for (n = 0; n < nsibling; n++) { - ce = intel_execlists_create_virtual(siblings, nsibling); + ce = intel_engine_create_virtual(siblings, nsibling); if (IS_ERR(ce)) { err = PTR_ERR(ce); goto out; @@ -4208,7 +4208,7 @@ static int preserved_virtual_engine(struct intel_gt *gt, if (err) goto out_scratch; - ve = intel_execlists_create_virtual(siblings, nsibling); + ve = intel_engine_create_virtual(siblings, nsibling); if (IS_ERR(ve)) { err = PTR_ERR(ve); goto out_scratch; @@ -4348,7 +4348,7 @@ static int reset_virtual_engine(struct intel_gt *gt, if (igt_spinner_init(&spin, gt)) return -ENOMEM; - ve = intel_execlists_create_virtual(siblings, nsibling); + ve = intel_engine_create_virtual(siblings, nsibling); if (IS_ERR(ve)) { err = PTR_ERR(ve); goto out_spin; diff --git a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c index 7aea10aa1fb4..a93a9b0d258e 100644 --- a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c +++ b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c @@ -17,6 +17,8 @@ #include "selftests/igt_flush_test.h" #include "selftests/igt_reset.h" #include "selftests/igt_atomic.h" +#include "selftests/igt_spinner.h" +#include "selftests/intel_scheduler_helpers.h" #include "selftests/mock_drm.h" @@ -378,6 +380,7 @@ static int igt_reset_nop(void *arg) ce = intel_context_create(engine); if 
(IS_ERR(ce)) { err = PTR_ERR(ce); + pr_err("[%s] Create context failed: %d!\n", engine->name, err); break; } @@ -387,6 +390,7 @@ static int igt_reset_nop(void *arg) rq = intel_context_create_request(ce); if (IS_ERR(rq)) { err = PTR_ERR(rq); + pr_err("[%s] Create request failed: %d!\n", engine->name, err); break; } @@ -401,24 +405,31 @@ static int igt_reset_nop(void *arg) igt_global_reset_unlock(gt); if (intel_gt_is_wedged(gt)) { + pr_err("[%s] GT is wedged!\n", engine->name); err = -EIO; break; } if (i915_reset_count(global) != reset_count + ++count) { - pr_err("Full GPU reset not recorded!\n"); + pr_err("[%s] Reset not recorded: %d vs %d + %d!\n", + engine->name, i915_reset_count(global), reset_count, count); err = -EINVAL; break; } err = igt_flush_test(gt->i915); - if (err) + if (err) { + pr_err("[%s] Flush failed: %d!\n", engine->name, err); break; + } } while (time_before(jiffies, end_time)); pr_info("%s: %d resets\n", __func__, count); - if (igt_flush_test(gt->i915)) + if (igt_flush_test(gt->i915)) { + pr_err("Post flush failed: %d!\n", err); err = -EIO; + } + return err; } @@ -440,9 +451,19 @@ static int igt_reset_nop_engine(void *arg) IGT_TIMEOUT(end_time); int err; + if (intel_engine_uses_guc(engine)) { + /* Engine level resets are triggered by GuC when a hang + * is detected. They can't be triggered by the KMD any + * more. Thus a nop batch cannot be used as a reset test + */ + continue; + } + ce = intel_context_create(engine); - if (IS_ERR(ce)) + if (IS_ERR(ce)) { + pr_err("[%s] Create context failed: %d!\n", engine->name, err); return PTR_ERR(ce); + } reset_count = i915_reset_count(global); reset_engine_count = i915_reset_engine_count(global, engine); @@ -549,9 +570,15 @@ static int igt_reset_fail_engine(void *arg) IGT_TIMEOUT(end_time); int err; + /* Can't manually break the reset if i915 doesn't perform it */ + if (intel_engine_uses_guc(engine)) + continue; + ce = intel_context_create(engine); - if (IS_ERR(ce)) + if (IS_ERR(ce)) { + pr_err("[%s] Create context failed: %d!\n", engine->name, err); return PTR_ERR(ce); + } st_engine_heartbeat_disable(engine); set_bit(I915_RESET_ENGINE + id, >->reset.flags); @@ -686,8 +713,12 @@ static int __igt_reset_engine(struct intel_gt *gt, bool active) for_each_engine(engine, gt, id) { unsigned int reset_count, reset_engine_count; unsigned long count; + bool using_guc = intel_engine_uses_guc(engine); IGT_TIMEOUT(end_time); + if (using_guc && !active) + continue; + if (active && !intel_engine_can_store_dword(engine)) continue; @@ -705,13 +736,23 @@ static int __igt_reset_engine(struct intel_gt *gt, bool active) set_bit(I915_RESET_ENGINE + id, >->reset.flags); count = 0; do { - if (active) { - struct i915_request *rq; + struct i915_request *rq = NULL; + struct intel_selftest_saved_policy saved; + int err2; + + err = intel_selftest_modify_policy(engine, &saved, + SELFTEST_SCHEDULER_MODIFY_FAST_RESET); + if (err) { + pr_err("[%s] Modify policy failed: %d!\n", engine->name, err); + break; + } + if (active) { rq = hang_create_request(&h, engine); if (IS_ERR(rq)) { err = PTR_ERR(rq); - break; + pr_err("[%s] Create hang request failed: %d!\n", engine->name, err); + goto restore; } i915_request_get(rq); @@ -727,34 +768,58 @@ static int __igt_reset_engine(struct intel_gt *gt, bool active) i915_request_put(rq); err = -EIO; - break; + goto restore; } + } - i915_request_put(rq); + if (!using_guc) { + err = intel_engine_reset(engine, NULL); + if (err) { + pr_err("intel_engine_reset(%s) failed, err:%d\n", + engine->name, err); + goto skip; + } } - err = 
intel_engine_reset(engine, NULL); - if (err) { - pr_err("intel_engine_reset(%s) failed, err:%d\n", - engine->name, err); - break; + if (rq) { + /* Ensure the reset happens and kills the engine */ + err = intel_selftest_wait_for_rq(rq); + if (err) + pr_err("[%s] Wait for request %lld:%lld [0x%04X] failed: %d!\n", + engine->name, rq->fence.context, rq->fence.seqno, rq->context->guc_id, err); } +skip: + if (rq) + i915_request_put(rq); + if (i915_reset_count(global) != reset_count) { pr_err("Full GPU reset recorded! (engine reset expected)\n"); err = -EINVAL; - break; + goto restore; } - if (i915_reset_engine_count(global, engine) != - ++reset_engine_count) { - pr_err("%s engine reset not recorded!\n", - engine->name); - err = -EINVAL; - break; + /* GuC based resets are not logged per engine */ + if (!using_guc) { + if (i915_reset_engine_count(global, engine) != + ++reset_engine_count) { + pr_err("%s engine reset not recorded!\n", + engine->name); + err = -EINVAL; + goto restore; + } } count++; + +restore: + err2 = intel_selftest_restore_policy(engine, &saved); + if (err2) + pr_err("[%s] Restore policy failed: %d!\n", engine->name, err); + if (err == 0) + err = err2; + if (err) + break; } while (time_before(jiffies, end_time)); clear_bit(I915_RESET_ENGINE + id, >->reset.flags); st_engine_heartbeat_enable(engine); @@ -765,12 +830,16 @@ static int __igt_reset_engine(struct intel_gt *gt, bool active) break; err = igt_flush_test(gt->i915); - if (err) + if (err) { + pr_err("[%s] Flush failed: %d!\n", engine->name, err); break; + } } - if (intel_gt_is_wedged(gt)) + if (intel_gt_is_wedged(gt)) { + pr_err("GT is wedged!\n"); err = -EIO; + } if (active) hang_fini(&h); @@ -807,7 +876,7 @@ static int active_request_put(struct i915_request *rq) if (!rq) return 0; - if (i915_request_wait(rq, 0, 5 * HZ) < 0) { + if (i915_request_wait(rq, 0, 10 * HZ) < 0) { GEM_TRACE("%s timed out waiting for completion of fence %llx:%lld\n", rq->engine->name, rq->fence.context, @@ -837,6 +906,7 @@ static int active_engine(void *data) ce[count] = intel_context_create(engine); if (IS_ERR(ce[count])) { err = PTR_ERR(ce[count]); + pr_err("[%s] Create context #%ld failed: %d!\n", engine->name, count, err); while (--count) intel_context_put(ce[count]); return err; @@ -852,6 +922,7 @@ static int active_engine(void *data) new = intel_context_create_request(ce[idx]); if (IS_ERR(new)) { err = PTR_ERR(new); + pr_err("[%s] Create request #%d failed: %d!\n", engine->name, idx, err); break; } @@ -867,8 +938,10 @@ static int active_engine(void *data) } err = active_request_put(old); - if (err) + if (err) { + pr_err("[%s] Request put failed: %d!\n", engine->name, err); break; + } cond_resched(); } @@ -876,6 +949,9 @@ static int active_engine(void *data) for (count = 0; count < ARRAY_SIZE(rq); count++) { int err__ = active_request_put(rq[count]); + if (err) + pr_err("[%s] Request put #%ld failed: %d!\n", engine->name, count, err); + /* Keep the first error */ if (!err) err = err__; @@ -916,10 +992,13 @@ static int __igt_reset_engines(struct intel_gt *gt, struct active_engine threads[I915_NUM_ENGINES] = {}; unsigned long device = i915_reset_count(global); unsigned long count = 0, reported; + bool using_guc = intel_engine_uses_guc(engine); IGT_TIMEOUT(end_time); - if (flags & TEST_ACTIVE && - !intel_engine_can_store_dword(engine)) + if (flags & TEST_ACTIVE) { + if (!intel_engine_can_store_dword(engine)) + continue; + } else if (using_guc) continue; if (!wait_for_idle(engine)) { @@ -949,6 +1028,7 @@ static int __igt_reset_engines(struct 
intel_gt *gt, "igt/%s", other->name); if (IS_ERR(tsk)) { err = PTR_ERR(tsk); + pr_err("[%s] Thread spawn failed: %d!\n", engine->name, err); goto unwind; } @@ -958,16 +1038,26 @@ static int __igt_reset_engines(struct intel_gt *gt, yield(); /* start all threads before we begin */ - st_engine_heartbeat_disable(engine); + st_engine_heartbeat_disable_no_pm(engine); set_bit(I915_RESET_ENGINE + id, >->reset.flags); do { struct i915_request *rq = NULL; + struct intel_selftest_saved_policy saved; + int err2; + + err = intel_selftest_modify_policy(engine, &saved, + SELFTEST_SCHEDULER_MODIFY_FAST_RESET); + if (err) { + pr_err("[%s] Modify policy failed: %d!\n", engine->name, err); + break; + } if (flags & TEST_ACTIVE) { rq = hang_create_request(&h, engine); if (IS_ERR(rq)) { err = PTR_ERR(rq); - break; + pr_err("[%s] Create hang request failed: %d!\n", engine->name, err); + goto restore; } i915_request_get(rq); @@ -983,15 +1073,27 @@ static int __igt_reset_engines(struct intel_gt *gt, i915_request_put(rq); err = -EIO; - break; + goto restore; } + } else { + intel_engine_pm_get(engine); } - err = intel_engine_reset(engine, NULL); - if (err) { - pr_err("i915_reset_engine(%s:%s): failed, err=%d\n", - engine->name, test_name, err); - break; + if (!using_guc) { + err = intel_engine_reset(engine, NULL); + if (err) { + pr_err("i915_reset_engine(%s:%s): failed, err=%d\n", + engine->name, test_name, err); + goto restore; + } + } + + if (rq) { + /* Ensure the reset happens and kills the engine */ + err = intel_selftest_wait_for_rq(rq); + if (err) + pr_err("[%s] Wait for request %lld:%lld [0x%04X] failed: %d!\n", + engine->name, rq->fence.context, rq->fence.seqno, rq->context->guc_id, err); } count++; @@ -999,16 +1101,16 @@ static int __igt_reset_engines(struct intel_gt *gt, if (rq) { if (rq->fence.error != -EIO) { pr_err("i915_reset_engine(%s:%s):" - " failed to reset request %llx:%lld\n", + " failed to reset request %lld:%lld [0x%04X]\n", engine->name, test_name, rq->fence.context, - rq->fence.seqno); + rq->fence.seqno, rq->context->guc_id); i915_request_put(rq); GEM_TRACE_DUMP(); intel_gt_set_wedged(gt); err = -EIO; - break; + goto restore; } if (i915_request_wait(rq, 0, HZ / 5) < 0) { @@ -1027,12 +1129,15 @@ static int __igt_reset_engines(struct intel_gt *gt, GEM_TRACE_DUMP(); intel_gt_set_wedged(gt); err = -EIO; - break; + goto restore; } i915_request_put(rq); } + if (!(flags & TEST_ACTIVE)) + intel_engine_pm_put(engine); + if (!(flags & TEST_SELF) && !wait_for_idle(engine)) { struct drm_printer p = drm_info_printer(gt->i915->drm.dev); @@ -1044,22 +1149,34 @@ static int __igt_reset_engines(struct intel_gt *gt, "%s\n", engine->name); err = -EIO; - break; + goto restore; } + +restore: + err2 = intel_selftest_restore_policy(engine, &saved); + if (err2) + pr_err("[%s] Restore policy failed: %d!\n", engine->name, err2); + if (err == 0) + err = err2; + if (err) + break; } while (time_before(jiffies, end_time)); clear_bit(I915_RESET_ENGINE + id, >->reset.flags); - st_engine_heartbeat_enable(engine); + st_engine_heartbeat_enable_no_pm(engine); pr_info("i915_reset_engine(%s:%s): %lu resets\n", engine->name, test_name, count); - reported = i915_reset_engine_count(global, engine); - reported -= threads[engine->id].resets; - if (reported != count) { - pr_err("i915_reset_engine(%s:%s): reset %lu times, but reported %lu\n", - engine->name, test_name, count, reported); - if (!err) - err = -EINVAL; + /* GuC based resets are not logged per engine */ + if (!using_guc) { + reported = i915_reset_engine_count(global, engine); 
+ reported -= threads[engine->id].resets; + if (reported != count) { + pr_err("i915_reset_engine(%s:%s): reset %lu times, but reported %lu\n", + engine->name, test_name, count, reported); + if (!err) + err = -EINVAL; + } } unwind: @@ -1078,15 +1195,18 @@ static int __igt_reset_engines(struct intel_gt *gt, } put_task_struct(threads[tmp].task); - if (other->uabi_class != engine->uabi_class && - threads[tmp].resets != - i915_reset_engine_count(global, other)) { - pr_err("Innocent engine %s was reset (count=%ld)\n", - other->name, - i915_reset_engine_count(global, other) - - threads[tmp].resets); - if (!err) - err = -EINVAL; + /* GuC based resets are not logged per engine */ + if (!using_guc) { + if (other->uabi_class != engine->uabi_class && + threads[tmp].resets != + i915_reset_engine_count(global, other)) { + pr_err("Innocent engine %s was reset (count=%ld)\n", + other->name, + i915_reset_engine_count(global, other) - + threads[tmp].resets); + if (!err) + err = -EINVAL; + } } } @@ -1101,8 +1221,10 @@ static int __igt_reset_engines(struct intel_gt *gt, break; err = igt_flush_test(gt->i915); - if (err) + if (err) { + pr_err("[%s] Flush failed: %d!\n", engine->name, err); break; + } } if (intel_gt_is_wedged(gt)) @@ -1180,12 +1302,15 @@ static int igt_reset_wait(void *arg) igt_global_reset_lock(gt); err = hang_init(&h, gt); - if (err) + if (err) { + pr_err("[%s] Hang init failed: %d!\n", engine->name, err); goto unlock; + } rq = hang_create_request(&h, engine); if (IS_ERR(rq)) { err = PTR_ERR(rq); + pr_err("[%s] Create hang request failed: %d!\n", engine->name, err); goto fini; } @@ -1310,12 +1435,15 @@ static int __igt_reset_evict_vma(struct intel_gt *gt, /* Check that we can recover an unbind stuck on a hanging request */ err = hang_init(&h, gt); - if (err) + if (err) { + pr_err("[%s] Hang init failed: %d!\n", engine->name, err); return err; + } obj = i915_gem_object_create_internal(gt->i915, SZ_1M); if (IS_ERR(obj)) { err = PTR_ERR(obj); + pr_err("[%s] Create object failed: %d!\n", engine->name, err); goto fini; } @@ -1330,12 +1458,14 @@ static int __igt_reset_evict_vma(struct intel_gt *gt, arg.vma = i915_vma_instance(obj, vm, NULL); if (IS_ERR(arg.vma)) { err = PTR_ERR(arg.vma); + pr_err("[%s] VMA instance failed: %d!\n", engine->name, err); goto out_obj; } rq = hang_create_request(&h, engine); if (IS_ERR(rq)) { err = PTR_ERR(rq); + pr_err("[%s] Create hang request failed: %d!\n", engine->name, err); goto out_obj; } @@ -1347,6 +1477,7 @@ static int __igt_reset_evict_vma(struct intel_gt *gt, err = i915_vma_pin(arg.vma, 0, 0, pin_flags); if (err) { i915_request_add(rq); + pr_err("[%s] VMA pin failed: %d!\n", engine->name, err); goto out_obj; } @@ -1363,8 +1494,14 @@ static int __igt_reset_evict_vma(struct intel_gt *gt, i915_vma_lock(arg.vma); err = i915_request_await_object(rq, arg.vma->obj, flags & EXEC_OBJECT_WRITE); - if (err == 0) + if (err == 0) { err = i915_vma_move_to_active(arg.vma, rq, flags); + if (err) + pr_err("[%s] Move to active failed: %d!\n", engine->name, err); + } else { + pr_err("[%s] Request await failed: %d!\n", engine->name, err); + } + i915_vma_unlock(arg.vma); if (flags & EXEC_OBJECT_NEEDS_FENCE) @@ -1392,6 +1529,7 @@ static int __igt_reset_evict_vma(struct intel_gt *gt, tsk = kthread_run(fn, &arg, "igt/evict_vma"); if (IS_ERR(tsk)) { err = PTR_ERR(tsk); + pr_err("[%s] Thread spawn failed: %d!\n", engine->name, err); tsk = NULL; goto out_reset; } @@ -1508,17 +1646,29 @@ static int igt_reset_queue(void *arg) goto unlock; for_each_engine(engine, gt, id) { + struct 
intel_selftest_saved_policy saved; struct i915_request *prev; IGT_TIMEOUT(end_time); unsigned int count; + bool using_guc = intel_engine_uses_guc(engine); if (!intel_engine_can_store_dword(engine)) continue; + if (using_guc) { + err = intel_selftest_modify_policy(engine, &saved, + SELFTEST_SCHEDULER_MODIFY_NO_HANGCHECK); + if (err) { + pr_err("[%s] Modify policy failed: %d!\n", engine->name, err); + goto fini; + } + } + prev = hang_create_request(&h, engine); if (IS_ERR(prev)) { err = PTR_ERR(prev); - goto fini; + pr_err("[%s] Create 'prev' hang request failed: %d!\n", engine->name, err); + goto restore; } i915_request_get(prev); @@ -1532,7 +1682,8 @@ static int igt_reset_queue(void *arg) rq = hang_create_request(&h, engine); if (IS_ERR(rq)) { err = PTR_ERR(rq); - goto fini; + pr_err("[%s] Create hang request failed: %d!\n", engine->name, err); + goto restore; } i915_request_get(rq); @@ -1557,7 +1708,7 @@ static int igt_reset_queue(void *arg) GEM_TRACE_DUMP(); intel_gt_set_wedged(gt); - goto fini; + goto restore; } if (!wait_until_running(&h, prev)) { @@ -1575,7 +1726,7 @@ static int igt_reset_queue(void *arg) intel_gt_set_wedged(gt); err = -EIO; - goto fini; + goto restore; } reset_count = fake_hangcheck(gt, BIT(id)); @@ -1586,7 +1737,7 @@ static int igt_reset_queue(void *arg) i915_request_put(rq); i915_request_put(prev); err = -EINVAL; - goto fini; + goto restore; } if (rq->fence.error) { @@ -1595,7 +1746,7 @@ static int igt_reset_queue(void *arg) i915_request_put(rq); i915_request_put(prev); err = -EINVAL; - goto fini; + goto restore; } if (i915_reset_count(global) == reset_count) { @@ -1603,7 +1754,7 @@ static int igt_reset_queue(void *arg) i915_request_put(rq); i915_request_put(prev); err = -EINVAL; - goto fini; + goto restore; } i915_request_put(prev); @@ -1618,9 +1769,22 @@ static int igt_reset_queue(void *arg) i915_request_put(prev); - err = igt_flush_test(gt->i915); +restore: + if (using_guc) { + int err2 = intel_selftest_restore_policy(engine, &saved); + if (err2) + pr_err("%s:%d> [%s] Restore policy failed: %d!\n", __func__, __LINE__, engine->name, err2); + if (err == 0) + err = err2; + } if (err) + goto fini; + + err = igt_flush_test(gt->i915); + if (err) { + pr_err("[%s] Flush failed: %d!\n", engine->name, err); break; + } } fini: @@ -1653,12 +1817,15 @@ static int igt_handle_error(void *arg) return 0; err = hang_init(&h, gt); - if (err) + if (err) { + pr_err("[%s] Hang init failed: %d!\n", engine->name, err); return err; + } rq = hang_create_request(&h, engine); if (IS_ERR(rq)) { err = PTR_ERR(rq); + pr_err("[%s] Create hang request failed: %d!\n", engine->name, err); goto err_fini; } @@ -1743,12 +1910,15 @@ static int igt_atomic_reset_engine(struct intel_engine_cs *engine, return err; err = hang_init(&h, engine->gt); - if (err) + if (err) { + pr_err("[%s] Hang init failed: %d!\n", engine->name, err); return err; + } rq = hang_create_request(&h, engine); if (IS_ERR(rq)) { err = PTR_ERR(rq); + pr_err("[%s] Create hang request failed: %d!\n", engine->name, err); goto out; } diff --git a/drivers/gpu/drm/i915/gt/selftest_mocs.c b/drivers/gpu/drm/i915/gt/selftest_mocs.c index 8763bbeca0f7..13d25bf2a94a 100644 --- a/drivers/gpu/drm/i915/gt/selftest_mocs.c +++ b/drivers/gpu/drm/i915/gt/selftest_mocs.c @@ -10,6 +10,7 @@ #include "gem/selftests/mock_context.h" #include "selftests/igt_reset.h" #include "selftests/igt_spinner.h" +#include "selftests/intel_scheduler_helpers.h" struct live_mocs { struct drm_i915_mocs_table table; @@ -318,7 +319,8 @@ static int live_mocs_clean(void *arg) } 
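The reset selftest rework above (and the equivalent changes in the MOCS and workaround tests that follow) repeats one pattern: under GuC submission the KMD can no longer trigger an engine reset, so the test switches the scheduling policy to a fast reset, submits a hanging request, waits for the firmware to kill it, and restores the saved policy. Condensed into one place, under the assumption that the intel_scheduler_helpers functions behave as their names suggest:

static int example_fast_reset_cycle(struct intel_engine_cs *engine,
                                    struct hang *h)
{
        struct intel_selftest_saved_policy saved;
        struct i915_request *rq;
        int err, err2;

        err = intel_selftest_modify_policy(engine, &saved,
                                           SELFTEST_SCHEDULER_MODIFY_FAST_RESET);
        if (err)
                return err;

        rq = hang_create_request(h, engine);    /* spinner that never completes */
        if (IS_ERR(rq)) {
                err = PTR_ERR(rq);
                goto restore;
        }

        i915_request_get(rq);
        i915_request_add(rq);

        /* The GuC (or i915) must reset the engine and fence the request. */
        err = intel_selftest_wait_for_rq(rq);
        i915_request_put(rq);

restore:
        err2 = intel_selftest_restore_policy(engine, &saved);
        if (!err)
                err = err2;
        return err;
}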
static int active_engine_reset(struct intel_context *ce, - const char *reason) + const char *reason, + bool using_guc) { struct igt_spinner spin; struct i915_request *rq; @@ -335,9 +337,13 @@ static int active_engine_reset(struct intel_context *ce, } err = request_add_spin(rq, &spin); - if (err == 0) + if (err == 0 && !using_guc) err = intel_engine_reset(ce->engine, reason); + /* Ensure the reset happens and kills the engine */ + if (err == 0) + err = intel_selftest_wait_for_rq(rq); + igt_spinner_end(&spin); igt_spinner_fini(&spin); @@ -345,21 +351,23 @@ static int active_engine_reset(struct intel_context *ce, } static int __live_mocs_reset(struct live_mocs *mocs, - struct intel_context *ce) + struct intel_context *ce, bool using_guc) { struct intel_gt *gt = ce->engine->gt; int err; if (intel_has_reset_engine(gt)) { - err = intel_engine_reset(ce->engine, "mocs"); - if (err) - return err; - - err = check_mocs_engine(mocs, ce); - if (err) - return err; + if (!using_guc) { + err = intel_engine_reset(ce->engine, "mocs"); + if (err) + return err; + + err = check_mocs_engine(mocs, ce); + if (err) + return err; + } - err = active_engine_reset(ce, "mocs"); + err = active_engine_reset(ce, "mocs", using_guc); if (err) return err; @@ -395,19 +403,33 @@ static int live_mocs_reset(void *arg) igt_global_reset_lock(gt); for_each_engine(engine, gt, id) { + bool using_guc = intel_engine_uses_guc(engine); + struct intel_selftest_saved_policy saved; struct intel_context *ce; + int err2; + + err = intel_selftest_modify_policy(engine, &saved, + SELFTEST_SCHEDULER_MODIFY_FAST_RESET); + if (err) + break; ce = mocs_context_create(engine); if (IS_ERR(ce)) { err = PTR_ERR(ce); - break; + goto restore; } intel_engine_pm_get(engine); - err = __live_mocs_reset(&mocs, ce); - intel_engine_pm_put(engine); + err = __live_mocs_reset(&mocs, ce, using_guc); + + intel_engine_pm_put(engine); intel_context_put(ce); + +restore: + err2 = intel_selftest_restore_policy(engine, &saved); + if (err == 0) + err = err2; if (err) break; } diff --git a/drivers/gpu/drm/i915/gt/selftest_workarounds.c b/drivers/gpu/drm/i915/gt/selftest_workarounds.c index 7ebc4edb8ecf..d820f0b41634 100644 --- a/drivers/gpu/drm/i915/gt/selftest_workarounds.c +++ b/drivers/gpu/drm/i915/gt/selftest_workarounds.c @@ -12,6 +12,7 @@ #include "selftests/igt_flush_test.h" #include "selftests/igt_reset.h" #include "selftests/igt_spinner.h" +#include "selftests/intel_scheduler_helpers.h" #include "selftests/mock_drm.h" #include "gem/selftests/igt_gem_utils.h" @@ -261,28 +262,34 @@ static int do_engine_reset(struct intel_engine_cs *engine) return intel_engine_reset(engine, "live_workarounds"); } +static int do_guc_reset(struct intel_engine_cs *engine) +{ + /* Currently a no-op as the reset is handled by GuC */ + return 0; +} + static int switch_to_scratch_context(struct intel_engine_cs *engine, - struct igt_spinner *spin) + struct igt_spinner *spin, + struct i915_request **rq) { struct intel_context *ce; - struct i915_request *rq; int err = 0; ce = intel_context_create(engine); if (IS_ERR(ce)) return PTR_ERR(ce); - rq = igt_spinner_create_request(spin, ce, MI_NOOP); + *rq = igt_spinner_create_request(spin, ce, MI_NOOP); intel_context_put(ce); - if (IS_ERR(rq)) { + if (IS_ERR(*rq)) { spin = NULL; - err = PTR_ERR(rq); + err = PTR_ERR(*rq); goto err; } - err = request_add_spin(rq, spin); + err = request_add_spin(*rq, spin); err: if (err && spin) igt_spinner_end(spin); @@ -296,6 +303,7 @@ static int check_whitelist_across_reset(struct intel_engine_cs *engine, { struct 
intel_context *ce, *tmp; struct igt_spinner spin; + struct i915_request *rq; intel_wakeref_t wakeref; int err; @@ -316,13 +324,24 @@ static int check_whitelist_across_reset(struct intel_engine_cs *engine, goto out_spin; } - err = switch_to_scratch_context(engine, &spin); + err = switch_to_scratch_context(engine, &spin, &rq); if (err) goto out_spin; + /* Ensure the spinner hasn't aborted */ + if (i915_request_completed(rq)) { + pr_err("%s spinner failed to start\n", name); + err = -ETIMEDOUT; + goto out_spin; + } + with_intel_runtime_pm(engine->uncore->rpm, wakeref) err = reset(engine); + /* Ensure the reset happens and kills the engine */ + if (err == 0) + err = intel_selftest_wait_for_rq(rq); + igt_spinner_end(&spin); if (err) { @@ -787,9 +806,27 @@ static int live_reset_whitelist(void *arg) continue; if (intel_has_reset_engine(gt)) { - err = check_whitelist_across_reset(engine, - do_engine_reset, - "engine"); + if (intel_engine_uses_guc(engine)) { + struct intel_selftest_saved_policy saved; + int err2; + + err = intel_selftest_modify_policy(engine, &saved, + SELFTEST_SCHEDULER_MODIFY_FAST_RESET); + if(err) + goto out; + + err = check_whitelist_across_reset(engine, + do_guc_reset, + "guc"); + + err2 = intel_selftest_restore_policy(engine, &saved); + if (err == 0) + err = err2; + } else + err = check_whitelist_across_reset(engine, + do_engine_reset, + "engine"); + if (err) goto out; } @@ -1226,31 +1263,42 @@ live_engine_reset_workarounds(void *arg) reference_lists_init(gt, &lists); for_each_engine(engine, gt, id) { + struct intel_selftest_saved_policy saved; + bool using_guc = intel_engine_uses_guc(engine); bool ok; + int ret2; pr_info("Verifying after %s reset...\n", engine->name); + ret = intel_selftest_modify_policy(engine, &saved, + SELFTEST_SCHEDULER_MODIFY_FAST_RESET); + if (ret) + break; + + ce = intel_context_create(engine); if (IS_ERR(ce)) { ret = PTR_ERR(ce); - break; + goto restore; } - ok = verify_wa_lists(gt, &lists, "before reset"); - if (!ok) { - ret = -ESRCH; - goto err; - } + if (!using_guc) { + ok = verify_wa_lists(gt, &lists, "before reset"); + if (!ok) { + ret = -ESRCH; + goto err; + } - ret = intel_engine_reset(engine, "live_workarounds:idle"); - if (ret) { - pr_err("%s: Reset failed while idle\n", engine->name); - goto err; - } + ret = intel_engine_reset(engine, "live_workarounds:idle"); + if (ret) { + pr_err("%s: Reset failed while idle\n", engine->name); + goto err; + } - ok = verify_wa_lists(gt, &lists, "after idle reset"); - if (!ok) { - ret = -ESRCH; - goto err; + ok = verify_wa_lists(gt, &lists, "after idle reset"); + if (!ok) { + ret = -ESRCH; + goto err; + } } ret = igt_spinner_init(&spin, engine->gt); @@ -1271,25 +1319,41 @@ live_engine_reset_workarounds(void *arg) goto err; } - ret = intel_engine_reset(engine, "live_workarounds:active"); - if (ret) { - pr_err("%s: Reset failed on an active spinner\n", - engine->name); - igt_spinner_fini(&spin); - goto err; + /* Ensure the spinner hasn't aborted */ + if (i915_request_completed(rq)) { + ret = -ETIMEDOUT; + goto skip; + } + + if (!using_guc) { + ret = intel_engine_reset(engine, "live_workarounds:active"); + if (ret) { + pr_err("%s: Reset failed on an active spinner\n", + engine->name); + igt_spinner_fini(&spin); + goto err; + } } + /* Ensure the reset happens and kills the engine */ + if (ret == 0) + ret = intel_selftest_wait_for_rq(rq); + +skip: igt_spinner_end(&spin); igt_spinner_fini(&spin); ok = verify_wa_lists(gt, &lists, "after busy reset"); - if (!ok) { + if (!ok) ret = -ESRCH; - goto err; - } err: 
intel_context_put(ce); + +restore: + ret2 = intel_selftest_restore_policy(engine, &saved); + if (ret == 0) + ret = ret2; if (ret) break; } diff --git a/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h b/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h index 2d6198e63ebe..596cf4b818e5 100644 --- a/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h +++ b/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h @@ -124,10 +124,25 @@ enum intel_guc_action { INTEL_GUC_ACTION_FORCE_LOG_BUFFER_FLUSH = 0x302, INTEL_GUC_ACTION_ENTER_S_STATE = 0x501, INTEL_GUC_ACTION_EXIT_S_STATE = 0x502, + INTEL_GUC_ACTION_GLOBAL_SCHED_POLICY_CHANGE = 0x506, + INTEL_GUC_ACTION_SCHED_CONTEXT = 0x1000, + INTEL_GUC_ACTION_SCHED_CONTEXT_MODE_SET = 0x1001, + INTEL_GUC_ACTION_SCHED_CONTEXT_MODE_DONE = 0x1002, + INTEL_GUC_ACTION_SCHED_ENGINE_MODE_SET = 0x1003, + INTEL_GUC_ACTION_SCHED_ENGINE_MODE_DONE = 0x1004, + INTEL_GUC_ACTION_SET_CONTEXT_PRIORITY = 0x1005, + INTEL_GUC_ACTION_SET_CONTEXT_EXECUTION_QUANTUM = 0x1006, + INTEL_GUC_ACTION_SET_CONTEXT_PREEMPTION_TIMEOUT = 0x1007, + INTEL_GUC_ACTION_CONTEXT_RESET_NOTIFICATION = 0x1008, + INTEL_GUC_ACTION_ENGINE_FAILURE_NOTIFICATION = 0x1009, INTEL_GUC_ACTION_SLPC_REQUEST = 0x3003, INTEL_GUC_ACTION_AUTHENTICATE_HUC = 0x4000, + INTEL_GUC_ACTION_REGISTER_CONTEXT = 0x4502, + INTEL_GUC_ACTION_DEREGISTER_CONTEXT = 0x4503, INTEL_GUC_ACTION_REGISTER_COMMAND_TRANSPORT_BUFFER = 0x4505, INTEL_GUC_ACTION_DEREGISTER_COMMAND_TRANSPORT_BUFFER = 0x4506, + INTEL_GUC_ACTION_DEREGISTER_CONTEXT_DONE = 0x4600, + INTEL_GUC_ACTION_RESET_CLIENT = 0x5B01, INTEL_GUC_ACTION_LIMIT }; diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc.c index 6661dcb02239..979128e28372 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.c @@ -180,6 +180,11 @@ void intel_guc_init_early(struct intel_guc *guc) } } +void intel_guc_init_late(struct intel_guc *guc) +{ + intel_guc_ads_init_late(guc); +} + static u32 guc_ctl_debug_flags(struct intel_guc *guc) { u32 level = intel_guc_log_get_level(&guc->log); @@ -524,65 +529,35 @@ int intel_guc_auth_huc(struct intel_guc *guc, u32 rsa_offset) */ int intel_guc_suspend(struct intel_guc *guc) { - struct intel_uncore *uncore = guc_to_gt(guc)->uncore; int ret; - u32 status; u32 action[] = { - INTEL_GUC_ACTION_ENTER_S_STATE, - GUC_POWER_D1, /* any value greater than GUC_POWER_D0 */ + INTEL_GUC_ACTION_RESET_CLIENT, }; - /* - * If GuC communication is enabled but submission is not supported, - * we do not need to suspend the GuC. - */ - if (!intel_guc_submission_is_used(guc) || !intel_guc_is_ready(guc)) + if (!intel_guc_is_ready(guc)) return 0; - /* - * The ENTER_S_STATE action queues the save/restore operation in GuC FW - * and then returns, so waiting on the H2G is not enough to guarantee - * GuC is done. When all the processing is done, GuC writes - * INTEL_GUC_SLEEP_STATE_SUCCESS to scratch register 14, so we can poll - * on that. Note that GuC does not ensure that the value in the register - * is different from INTEL_GUC_SLEEP_STATE_SUCCESS while the action is - * in progress so we need to take care of that ourselves as well. 
- */ - - intel_uncore_write(uncore, SOFT_SCRATCH(14), - INTEL_GUC_SLEEP_STATE_INVALID_MASK); - - ret = intel_guc_send(guc, action, ARRAY_SIZE(action)); - if (ret) - return ret; - - ret = __intel_wait_for_register(uncore, SOFT_SCRATCH(14), - INTEL_GUC_SLEEP_STATE_INVALID_MASK, - 0, 0, 10, &status); - if (ret) - return ret; - - if (status != INTEL_GUC_SLEEP_STATE_SUCCESS) { - DRM_ERROR("GuC failed to change sleep state. " - "action=0x%x, err=%u\n", - action[0], status); - return -EIO; + if (intel_guc_submission_is_used(guc)) { + /* + * This H2G MMIO command tears down the GuC in two steps. First it will + * generate a G2H CTB for every active context indicating a reset. In + * practice the i915 shouldn't ever get a G2H as suspend should only be + * called when the GPU is idle. Next, it tears down the CTBs and this + * H2G MMIO command completes. + * + * Don't abort on a failure code from the GuC. Keep going and do the + * clean up in santize() and re-initialisation on resume and hopefully + * the error here won't be problematic. + */ + ret = intel_guc_send_mmio(guc, action, ARRAY_SIZE(action), NULL, 0); + if (ret) + DRM_ERROR("GuC suspend: RESET_CLIENT action failed with error %d!\n", ret); } - return 0; -} + /* Signal that the GuC isn't running. */ + intel_guc_sanitize(guc); -/** - * intel_guc_reset_engine() - ask GuC to reset an engine - * @guc: intel_guc structure - * @engine: engine to be reset - */ -int intel_guc_reset_engine(struct intel_guc *guc, - struct intel_engine_cs *engine) -{ - /* XXX: to be implemented with submission interface rework */ - - return -ENODEV; + return 0; } /** @@ -591,7 +566,12 @@ int intel_guc_reset_engine(struct intel_guc *guc, */ int intel_guc_resume(struct intel_guc *guc) { - /* XXX: to be implemented with submission interface rework */ + /* + * NB: This function can still be called even if GuC submission is + * disabled, e.g. if GuC is enabled for HuC authentication only. Thus, + * if any code is later added here, it must be support doing nothing + * if submission is disabled (as per intel_guc_suspend). 
+ */ return 0; } diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc.h index 72e4653222e2..5d94cf482516 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h @@ -6,6 +6,9 @@ #ifndef _INTEL_GUC_H_ #define _INTEL_GUC_H_ +#include +#include + #include "intel_uncore.h" #include "intel_guc_fw.h" #include "intel_guc_fwif.h" @@ -28,23 +31,43 @@ struct intel_guc { struct intel_guc_log log; struct intel_guc_ct ct; + /* Global engine used to submit requests to GuC */ + struct i915_sched_engine *sched_engine; + struct i915_request *stalled_request; + /* intel_guc_recv interrupt related state */ spinlock_t irq_lock; unsigned int msg_enabled_mask; + atomic_t outstanding_submission_g2h; + struct { void (*reset)(struct intel_guc *guc); void (*enable)(struct intel_guc *guc); void (*disable)(struct intel_guc *guc); } interrupts; + /* + * contexts_lock protects the pool of free guc ids and a linked list of + * guc ids available to be stolen + */ + spinlock_t contexts_lock; + struct ida guc_ids; + struct list_head guc_id_list; + + bool submission_supported; bool submission_selected; struct i915_vma *ads_vma; struct __guc_ads_blob *ads_blob; + u32 ads_regset_size; + u32 ads_golden_ctxt_size; + + struct i915_vma *lrc_desc_pool; + void *lrc_desc_pool_vaddr; - struct i915_vma *stage_desc_pool; - void *stage_desc_pool_vaddr; + /* guc_id to intel_context lookup */ + struct xarray context_lookup; /* Control params for fw initialization */ u32 params[GUC_CTL_MAX_DWORDS]; @@ -78,10 +101,11 @@ inline int intel_guc_send(struct intel_guc *guc, const u32 *action, u32 len) } static -inline int intel_guc_send_nb(struct intel_guc *guc, const u32 *action, u32 len) +inline int intel_guc_send_nb(struct intel_guc *guc, const u32 *action, u32 len, + u32 g2h_len_dw) { return intel_guc_ct_send(&guc->ct, action, len, NULL, 0, - INTEL_GUC_CT_SEND_NB); + MAKE_SEND_FLAGS(g2h_len_dw)); } static inline int @@ -92,6 +116,35 @@ intel_guc_send_and_receive(struct intel_guc *guc, const u32 *action, u32 len, response_buf, response_buf_size, 0); } +static inline int intel_guc_send_busy_loop(struct intel_guc* guc, + const u32 *action, + u32 len, + u32 g2h_len_dw, + bool loop) +{ + int err; + unsigned int sleep_period_ms = 1; + bool not_atomic = !in_atomic() && !irqs_disabled(); + + /* No sleeping with spin locks, just busy loop */ + might_sleep_if(loop && not_atomic); + +retry: + err = intel_guc_send_nb(guc, action, len, g2h_len_dw); + if (unlikely(err == -EBUSY && loop)) { + if (likely(not_atomic)) { + if (msleep_interruptible(sleep_period_ms)) + return -EINTR; + sleep_period_ms = sleep_period_ms << 1; + } else { + cpu_relax(); + } + goto retry; + } + + return err; +} + static inline void intel_guc_to_host_event_handler(struct intel_guc *guc) { intel_guc_ct_event_handler(&guc->ct); @@ -125,6 +178,7 @@ static inline u32 intel_guc_ggtt_offset(struct intel_guc *guc, } void intel_guc_init_early(struct intel_guc *guc); +void intel_guc_init_late(struct intel_guc *guc); void intel_guc_init_send_regs(struct intel_guc *guc); void intel_guc_write_params(struct intel_guc *guc); int intel_guc_init(struct intel_guc *guc); @@ -167,9 +221,25 @@ static inline bool intel_guc_is_ready(struct intel_guc *guc) return intel_guc_is_fw_running(guc) && intel_guc_ct_enabled(&guc->ct); } +static inline void intel_guc_reset_interrupts(struct intel_guc *guc) +{ + guc->interrupts.reset(guc); +} + +static inline void intel_guc_enable_interrupts(struct intel_guc *guc) +{ + 
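intel_guc_send_busy_loop() above is the helper the submission code leans on for H2G actions that must eventually land even when the CT buffer is backed up: it retries -EBUSY with an exponentially growing msleep() where sleeping is allowed, falling back to cpu_relax() in atomic context. A hedged usage sketch (the payload layout and the zero g2h_len_dw are illustrative, not the exact values the submission code uses):

static int example_register_context(struct intel_guc *guc, u32 guc_id,
                                    u32 ctx_desc_offset)
{
        u32 action[] = {
                INTEL_GUC_ACTION_REGISTER_CONTEXT,
                guc_id,
                ctx_desc_offset,
        };

        /*
         * g2h_len_dw reserves CT space for an expected G2H reply; real
         * callers pass a per-action constant. loop=true keeps retrying
         * -EBUSY until the send goes through or a signal arrives.
         */
        return intel_guc_send_busy_loop(guc, action, ARRAY_SIZE(action),
                                        0, true);
}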
guc->interrupts.enable(guc); +} + +static inline void intel_guc_disable_interrupts(struct intel_guc *guc) +{ + guc->interrupts.disable(guc); +} + static inline int intel_guc_sanitize(struct intel_guc *guc) { intel_uc_fw_sanitize(&guc->fw); + intel_guc_disable_interrupts(guc); intel_guc_ct_sanitize(&guc->ct); guc->mmio_msg = 0; @@ -190,8 +260,27 @@ static inline void intel_guc_disable_msg(struct intel_guc *guc, u32 mask) spin_unlock_irq(&guc->irq_lock); } -int intel_guc_reset_engine(struct intel_guc *guc, - struct intel_engine_cs *engine); +int intel_guc_wait_for_idle(struct intel_guc *guc, long timeout); + +int intel_guc_deregister_done_process_msg(struct intel_guc *guc, + const u32 *msg, u32 len); +int intel_guc_sched_done_process_msg(struct intel_guc *guc, + const u32 *msg, u32 len); +int intel_guc_context_reset_process_msg(struct intel_guc *guc, + const u32 *msg, u32 len); +int intel_guc_engine_failure_process_msg(struct intel_guc *guc, + const u32 *msg, u32 len); + +void intel_guc_find_hung_context(struct intel_engine_cs *engine); + +int intel_guc_global_policies_update(struct intel_guc *guc); + +void intel_guc_context_ban(struct intel_context *ce, struct i915_request *rq); + +void intel_guc_submission_reset_prepare(struct intel_guc *guc); +void intel_guc_submission_reset(struct intel_guc *guc, bool stalled); +void intel_guc_submission_reset_finish(struct intel_guc *guc); +void intel_guc_submission_cancel_requests(struct intel_guc *guc); void intel_guc_load_status(struct intel_guc *guc, struct drm_printer *p); diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c index b82145652d57..c56302dedb32 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c @@ -3,8 +3,11 @@ * Copyright © 2014-2019 Intel Corporation */ +#include + #include "gt/intel_gt.h" #include "gt/intel_lrc.h" +#include "gt/shmem_utils.h" #include "intel_guc_ads.h" #include "intel_guc_fwif.h" #include "intel_uc.h" @@ -23,6 +26,15 @@ * | guc_policies | * +---------------------------------------+ * | guc_gt_system_info | + * +---------------------------------------+ <== static + * | guc_mmio_reg[countA] (engine 0.0) | + * | guc_mmio_reg[countB] (engine 0.1) | + * | guc_mmio_reg[countC] (engine 1.0) | + * | ... | + * +---------------------------------------+ <== dynamic + * | padding | + * +---------------------------------------+ <== 4K aligned + * | golden contexts | * +---------------------------------------+ * | padding | * +---------------------------------------+ <== 4K aligned @@ -35,16 +47,49 @@ struct __guc_ads_blob { struct guc_ads ads; struct guc_policies policies; struct guc_gt_system_info system_info; + /* From here on, location is dynamic! Refer to above diagram. 
*/ + struct guc_mmio_reg regset[0]; } __packed; +static u32 guc_ads_regset_size(struct intel_guc *guc) +{ + GEM_BUG_ON(!guc->ads_regset_size); + return guc->ads_regset_size; +} + +static u32 guc_ads_golden_ctxt_size(struct intel_guc *guc) +{ + return PAGE_ALIGN(guc->ads_golden_ctxt_size); +} + static u32 guc_ads_private_data_size(struct intel_guc *guc) { return PAGE_ALIGN(guc->fw.private_data_size); } +static u32 guc_ads_regset_offset(struct intel_guc *guc) +{ + return offsetof(struct __guc_ads_blob, regset); +} + +static u32 guc_ads_golden_ctxt_offset(struct intel_guc *guc) +{ + u32 offset; + + offset = guc_ads_regset_offset(guc) + + guc_ads_regset_size(guc); + + return PAGE_ALIGN(offset); +} + static u32 guc_ads_private_data_offset(struct intel_guc *guc) { - return PAGE_ALIGN(sizeof(struct __guc_ads_blob)); + u32 offset; + + offset = guc_ads_golden_ctxt_offset(guc) + + guc_ads_golden_ctxt_size(guc); + + return PAGE_ALIGN(offset); } static u32 guc_ads_blob_size(struct intel_guc *guc) @@ -53,15 +98,68 @@ static u32 guc_ads_blob_size(struct intel_guc *guc) guc_ads_private_data_size(guc); } -static void guc_policies_init(struct guc_policies *policies) +static void guc_policies_init(struct intel_guc *guc, struct guc_policies *policies) { + struct intel_gt *gt = guc_to_gt(guc); + struct drm_i915_private *i915 = gt->i915; + policies->dpc_promote_time = GLOBAL_POLICY_DEFAULT_DPC_PROMOTE_TIME_US; policies->max_num_work_items = GLOBAL_POLICY_MAX_NUM_WI; - /* Disable automatic resets as not yet supported. */ - policies->global_flags = GLOBAL_POLICY_DISABLE_ENGINE_RESET; + + policies->global_flags = 0; + if (i915->params.reset < 2) + policies->global_flags |= GLOBAL_POLICY_DISABLE_ENGINE_RESET; + policies->is_valid = 1; } +void intel_guc_ads_print_policy_info(struct intel_guc *guc, + struct drm_printer *dp) +{ + struct __guc_ads_blob *blob = guc->ads_blob; + + if (unlikely(!blob)) + return; + + drm_printf(dp, "Global scheduling policies:\n"); + drm_printf(dp, " DPC promote time = %u\n", blob->policies.dpc_promote_time); + drm_printf(dp, " Max num work items = %u\n", blob->policies.max_num_work_items); + drm_printf(dp, " Flags = %u\n", blob->policies.global_flags); +} + +static int guc_action_policies_update(struct intel_guc *guc, u32 policy_offset) +{ + u32 action[] = { + INTEL_GUC_ACTION_GLOBAL_SCHED_POLICY_CHANGE, + policy_offset + }; + + return intel_guc_send_busy_loop(guc, action, ARRAY_SIZE(action), 0, true); +} + +int intel_guc_global_policies_update(struct intel_guc *guc) +{ + struct __guc_ads_blob *blob = guc->ads_blob; + struct intel_gt *gt = guc_to_gt(guc); + intel_wakeref_t wakeref; + int ret; + + if (!blob) + return -ENOTSUPP; + + GEM_BUG_ON(!blob->ads.scheduler_policies); + + guc_policies_init(guc, &blob->policies); + + if (!intel_guc_is_ready(guc)) + return 0; + + with_intel_runtime_pm(>->i915->runtime_pm, wakeref) + ret = guc_action_policies_update(guc, blob->ads.scheduler_policies); + + return ret; +} + static void guc_mapping_table_init(struct intel_gt *gt, struct guc_gt_system_info *system_info) { @@ -84,53 +182,323 @@ static void guc_mapping_table_init(struct intel_gt *gt, } /* - * The first 80 dwords of the register state context, containing the - * execlists and ppgtt registers. + * The save/restore register list must be pre-calculated to a temporary + * buffer of driver defined size before it can be generated in place + * inside the ADS. 
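To make the dynamic layout described above concrete: each region is sized at init time and the offsets simply stack up, page-aligned, behind the static part of the blob. A worked sketch, roughly equivalent to chaining the offset helpers above:

static u32 example_ads_blob_size(struct intel_guc *guc)
{
        u32 offset = offsetof(struct __guc_ads_blob, regset);  /* static part */

        offset = PAGE_ALIGN(offset + guc->ads_regset_size);    /* reg save/restore lists */
        offset += PAGE_ALIGN(guc->ads_golden_ctxt_size);        /* golden context images */

        return offset + PAGE_ALIGN(guc->fw.private_data_size);  /* GuC private data */
}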
*/ -#define LR_HW_CONTEXT_SIZE (80 * sizeof(u32)) +#define MAX_MMIO_REGS 128 /* Arbitrary size, increase as needed */ +struct temp_regset { + struct guc_mmio_reg *registers; + u32 used; + u32 size; +}; -static void __guc_ads_init(struct intel_guc *guc) +static int guc_mmio_reg_cmp(const void *a, const void *b) +{ + const struct guc_mmio_reg *ra = a; + const struct guc_mmio_reg *rb = b; + + return (int)ra->offset - (int)rb->offset; +} + +static void guc_mmio_reg_add(struct temp_regset *regset, + u32 offset, u32 flags) +{ + u32 count = regset->used; + struct guc_mmio_reg reg = { + .offset = offset, + .flags = flags, + }; + struct guc_mmio_reg *slot; + + GEM_BUG_ON(count >= regset->size); + + /* + * The mmio list is built using separate lists within the driver. + * It's possible that at some point we may attempt to add the same + * register more than once. Do not consider this an error; silently + * move on if the register is already in the list. + */ + if (bsearch(®, regset->registers, count, + sizeof(reg), guc_mmio_reg_cmp)) + return; + + slot = ®set->registers[count]; + regset->used++; + *slot = reg; + + while (slot-- > regset->registers) { + GEM_BUG_ON(slot[0].offset == slot[1].offset); + if (slot[1].offset > slot[0].offset) + break; + + swap(slot[1], slot[0]); + } +} + +#define GUC_MMIO_REG_ADD(regset, reg, masked) \ + guc_mmio_reg_add(regset, \ + i915_mmio_reg_offset((reg)), \ + (masked) ? GUC_REGSET_MASKED : 0) + +static void guc_mmio_regset_init(struct temp_regset *regset, + struct intel_engine_cs *engine) +{ + const u32 base = engine->mmio_base; + struct i915_wa_list *wal = &engine->wa_list; + struct i915_wa *wa; + unsigned int i; + + regset->used = 0; + + GUC_MMIO_REG_ADD(regset, RING_MODE_GEN7(base), true); + GUC_MMIO_REG_ADD(regset, RING_HWS_PGA(base), false); + GUC_MMIO_REG_ADD(regset, RING_IMR(base), false); + + for (i = 0, wa = wal->list; i < wal->count; i++, wa++) + GUC_MMIO_REG_ADD(regset, wa->reg, wa->masked_reg); + + /* Be extra paranoid and include all whitelist registers. */ + for (i = 0; i < RING_MAX_NONPRIV_SLOTS; i++) + GUC_MMIO_REG_ADD(regset, + RING_FORCE_TO_NONPRIV(base, i), + false); + + /* add in local MOCS registers */ + for (i = 0; i < GEN9_LNCFCMOCS_REG_COUNT; i++) + GUC_MMIO_REG_ADD(regset, GEN9_LNCFCMOCS(i), false); +} + +static int guc_mmio_reg_state_query(struct intel_guc *guc) { struct intel_gt *gt = guc_to_gt(guc); - struct drm_i915_private *i915 = gt->i915; + struct intel_engine_cs *engine; + enum intel_engine_id id; + struct temp_regset temp_set; + u32 total; + + /* + * Need to actually build the list in order to filter out + * duplicates and other such data dependent constructions. 
+ */ + temp_set.size = MAX_MMIO_REGS; + temp_set.registers = kmalloc_array(temp_set.size, + sizeof(*temp_set.registers), + GFP_KERNEL); + if (!temp_set.registers) + return -ENOMEM; + + total = 0; + for_each_engine(engine, gt, id) { + guc_mmio_regset_init(&temp_set, engine); + total += temp_set.used; + } + + kfree(temp_set.registers); + + return total * sizeof(struct guc_mmio_reg); +} + +static void guc_mmio_reg_state_init(struct intel_guc *guc, + struct __guc_ads_blob *blob) +{ + struct intel_gt *gt = guc_to_gt(guc); + struct intel_engine_cs *engine; + enum intel_engine_id id; + struct temp_regset temp_set; + struct guc_mmio_reg_set *ads_reg_set; + u32 addr_ggtt, offset; + u8 guc_class; + + offset = guc_ads_regset_offset(guc); + addr_ggtt = intel_guc_ggtt_offset(guc, guc->ads_vma) + offset; + temp_set.registers = (struct guc_mmio_reg *) (((u8 *) blob) + offset); + temp_set.size = guc->ads_regset_size / sizeof(temp_set.registers[0]); + + for_each_engine(engine, gt, id) { + /* Class index is checked in class converter */ + GEM_BUG_ON(engine->instance >= GUC_MAX_INSTANCES_PER_CLASS); + + guc_class = engine_class_to_guc_class(engine->class); + ads_reg_set = &blob->ads.reg_state_list[guc_class][engine->instance]; + + guc_mmio_regset_init(&temp_set, engine); + if (!temp_set.used) { + ads_reg_set->address = 0; + ads_reg_set->count = 0; + continue; + } + + ads_reg_set->address = addr_ggtt; + ads_reg_set->count = temp_set.used; + + temp_set.size -= temp_set.used; + temp_set.registers += temp_set.used; + addr_ggtt += temp_set.used * sizeof(struct guc_mmio_reg); + } + + GEM_BUG_ON(temp_set.size); +} + +static void fill_engine_enable_masks(struct intel_gt *gt, + struct guc_gt_system_info *info) +{ + info->engine_enabled_masks[GUC_RENDER_CLASS] = 1; + info->engine_enabled_masks[GUC_BLITTER_CLASS] = 1; + info->engine_enabled_masks[GUC_VIDEO_CLASS] = VDBOX_MASK(gt); + info->engine_enabled_masks[GUC_VIDEOENHANCE_CLASS] = VEBOX_MASK(gt); +} + +static int guc_prep_golden_context(struct intel_guc *guc, + struct __guc_ads_blob *blob) +{ + struct intel_gt *gt = guc_to_gt(guc); + u32 addr_ggtt, offset; + u32 total_size = 0, alloc_size, real_size; + u8 engine_class, guc_class; + struct guc_gt_system_info *info, local_info; + + /* + * Reserve the memory for the golden contexts and point GuC at it but + * leave it empty for now. The context data will be filled in later + * once there is something available to put there. + * + * Note that the HWSP and ring context are not included. + * + * Note also that the storage must be pinned in the GGTT, so that the + * address won't change after GuC has been told where to find it. The + * GuC will also validate that the LRC base + size fall within the + * allowed GGTT range. 
+ */ + if (blob) { + offset = guc_ads_golden_ctxt_offset(guc); + addr_ggtt = intel_guc_ggtt_offset(guc, guc->ads_vma) + offset; + info = &blob->system_info; + } else { + memset(&local_info, 0, sizeof(local_info)); + info = &local_info; + fill_engine_enable_masks(gt, info); + } + + for (engine_class = 0; engine_class <= MAX_ENGINE_CLASS; ++engine_class) { + if (engine_class == OTHER_CLASS) + continue; + + guc_class = engine_class_to_guc_class(engine_class); + + if (!info->engine_enabled_masks[guc_class]) + continue; + + real_size = intel_engine_context_size(gt, engine_class); + alloc_size = PAGE_ALIGN(real_size); + total_size += alloc_size; + + if (!blob) + continue; + + blob->ads.eng_state_size[guc_class] = real_size; + blob->ads.golden_context_lrca[guc_class] = addr_ggtt; + addr_ggtt += alloc_size; + } + + if (!blob) + return total_size; + + GEM_BUG_ON(guc->ads_golden_ctxt_size != total_size); + return total_size; +} + +static struct intel_engine_cs *find_engine_state(struct intel_gt *gt, u8 engine_class) +{ + struct intel_engine_cs *engine; + enum intel_engine_id id; + + for_each_engine(engine, gt, id) { + if (engine->class != engine_class) + continue; + + if (!engine->default_state) + continue; + + return engine; + } + + return NULL; +} + +static void guc_init_golden_context(struct intel_guc *guc) +{ struct __guc_ads_blob *blob = guc->ads_blob; - const u32 skipped_size = LRC_PPHWSP_SZ * PAGE_SIZE + LR_HW_CONTEXT_SIZE; - u32 base; + struct intel_engine_cs *engine; + struct intel_gt *gt = guc_to_gt(guc); + u32 addr_ggtt, offset; + u32 total_size = 0, alloc_size, real_size; u8 engine_class, guc_class; + u8 *ptr; - /* GuC scheduling policies */ - guc_policies_init(&blob->policies); + /* Skip execlist and PPGTT registers + HWSP */ + const u32 lr_hw_context_size = 80 * sizeof(u32); + const u32 skip_size = LRC_PPHWSP_SZ * PAGE_SIZE + + lr_hw_context_size; + + if (!intel_uc_uses_guc_submission(>->uc)) + return; + + GEM_BUG_ON(!blob); /* - * GuC expects a per-engine-class context image and size - * (minus hwsp and ring context). The context image will be - * used to reinitialize engines after a reset. It must exist - * and be pinned in the GGTT, so that the address won't change after - * we have told GuC where to find it. The context size will be used - * to validate that the LRC base + size fall within allowed GGTT. + * Go back and fill in the golden context data now that it is + * available. */ + offset = guc_ads_golden_ctxt_offset(guc); + addr_ggtt = intel_guc_ggtt_offset(guc, guc->ads_vma) + offset; + ptr = ((u8 *) blob) + offset; + for (engine_class = 0; engine_class <= MAX_ENGINE_CLASS; ++engine_class) { if (engine_class == OTHER_CLASS) continue; guc_class = engine_class_to_guc_class(engine_class); - /* - * TODO: Set context pointer to default state to allow - * GuC to re-init guilty contexts after internal reset. 
- */ - blob->ads.golden_context_lrca[guc_class] = 0; - blob->ads.eng_state_size[guc_class] = - intel_engine_context_size(guc_to_gt(guc), - engine_class) - - skipped_size; + if (!blob->system_info.engine_enabled_masks[guc_class]) + continue; + + real_size = intel_engine_context_size(gt, engine_class); + alloc_size = PAGE_ALIGN(real_size); + total_size += alloc_size; + + engine = find_engine_state(gt, engine_class); + if (!engine) { + drm_err(>->i915->drm, "No engine state recorded for class %d!\n", engine_class); + blob->ads.eng_state_size[guc_class] = 0; + blob->ads.golden_context_lrca[guc_class] = 0; + continue; + } + + GEM_BUG_ON(blob->ads.eng_state_size[guc_class] != real_size); + GEM_BUG_ON(blob->ads.golden_context_lrca[guc_class] != addr_ggtt); + addr_ggtt += alloc_size; + + shmem_read(engine->default_state, skip_size, ptr + skip_size, + real_size - skip_size); + ptr += alloc_size; } + GEM_BUG_ON(guc->ads_golden_ctxt_size != total_size); +} + +static void __guc_ads_init(struct intel_guc *guc) +{ + struct intel_gt *gt = guc_to_gt(guc); + struct drm_i915_private *i915 = gt->i915; + struct __guc_ads_blob *blob = guc->ads_blob; + u32 base; + + /* GuC scheduling policies */ + guc_policies_init(guc, &blob->policies); + /* System info */ - blob->system_info.engine_enabled_masks[GUC_RENDER_CLASS] = 1; - blob->system_info.engine_enabled_masks[GUC_BLITTER_CLASS] = 1; - blob->system_info.engine_enabled_masks[GUC_VIDEO_CLASS] = VDBOX_MASK(gt); - blob->system_info.engine_enabled_masks[GUC_VIDEOENHANCE_CLASS] = VEBOX_MASK(gt); + fill_engine_enable_masks(gt, &blob->system_info); blob->system_info.generic_gt_sysinfo[GUC_GENERIC_GT_SYSINFO_SLICE_ENABLED] = hweight8(gt->info.sseu.slice_mask); @@ -145,6 +513,9 @@ static void __guc_ads_init(struct intel_guc *guc) GEN12_DOORBELLS_PER_SQIDI) + 1; } + /* Golden contexts for re-initialising after a watchdog reset */ + guc_prep_golden_context(guc, blob); + guc_mapping_table_init(guc_to_gt(guc), &blob->system_info); base = intel_guc_ggtt_offset(guc, guc->ads_vma); @@ -153,6 +524,9 @@ static void __guc_ads_init(struct intel_guc *guc) blob->ads.scheduler_policies = base + ptr_offset(blob, policies); blob->ads.gt_system_info = base + ptr_offset(blob, system_info); + /* MMIO save/restore list */ + guc_mmio_reg_state_init(guc, blob); + /* Private Data */ blob->ads.private_data = base + guc_ads_private_data_offset(guc); @@ -173,6 +547,19 @@ int intel_guc_ads_create(struct intel_guc *guc) GEM_BUG_ON(guc->ads_vma); + /* Need to calculate the reg state size dynamically: */ + ret = guc_mmio_reg_state_query(guc); + if (ret < 0) + return ret; + guc->ads_regset_size = ret; + + /* Likewise the golden contexts: */ + ret = guc_prep_golden_context(guc, NULL); + if (ret < 0) + return ret; + guc->ads_golden_ctxt_size = ret; + + /* Now the total size can be determined: */ size = guc_ads_blob_size(guc); ret = intel_guc_allocate_and_map_vma(guc, size, &guc->ads_vma, @@ -185,6 +572,18 @@ int intel_guc_ads_create(struct intel_guc *guc) return 0; } +void intel_guc_ads_init_late(struct intel_guc *guc) +{ + /* + * The golden context setup requires the saved engine state from + * __engines_record_defaults(). However, that requires engines to be + * operational which means the ADS must already have been configured. + * Fortunately, the golden context state is not needed until a hang + * occurs, so it can be filled in during this late init phase. 
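The create path below sizes the blob with dry passes (for instance guc_prep_golden_context() called with a NULL blob purely to report the space it needs) before allocating and then filling the real memory. A small userspace sketch of that size-then-fill idiom, with invented item sizes:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Illustrative two-pass pattern: the same routine is first called with a
 * NULL destination to report how much space it needs, then called again to
 * fill the buffer allocated from that answer. Sizes are made up.
 */
static size_t emit_items(unsigned char *dst, const unsigned int *sizes,
			 unsigned int count)
{
	size_t total = 0;
	unsigned int i;

	for (i = 0; i < count; i++) {
		if (dst)
			memset(dst + total, i + 1, sizes[i]);	/* fill pass only */
		total += sizes[i];				/* both passes count */
	}
	return total;
}

int main(void)
{
	const unsigned int sizes[] = { 64, 128, 32 };
	size_t need = emit_items(NULL, sizes, 3);	/* sizing pass */
	unsigned char *buf = malloc(need);

	if (!buf)
		return 1;
	emit_items(buf, sizes, 3);			/* fill pass */
	printf("allocated and filled %zu bytes\n", need);
	free(buf);
	return 0;
}
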
+ */ + guc_init_golden_context(guc); +} + void intel_guc_ads_destroy(struct intel_guc *guc) { i915_vma_unpin_and_release(&guc->ads_vma, I915_VMA_RELEASE_MAP); diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.h index b00d3ae1113a..3d85051d57e4 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.h @@ -7,9 +7,13 @@ #define _INTEL_GUC_ADS_H_ struct intel_guc; +struct drm_printer; int intel_guc_ads_create(struct intel_guc *guc); void intel_guc_ads_destroy(struct intel_guc *guc); +void intel_guc_ads_init_late(struct intel_guc *guc); void intel_guc_ads_reset(struct intel_guc *guc); +void intel_guc_ads_print_policy_info(struct intel_guc *guc, + struct drm_printer *p); #endif diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c index 83ec60ea3f89..92976d205478 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c @@ -73,6 +73,7 @@ static inline struct drm_device *ct_to_drm(struct intel_guc_ct *ct) #define CTB_DESC_SIZE ALIGN(sizeof(struct guc_ct_buffer_desc), SZ_2K) #define CTB_H2G_BUFFER_SIZE (SZ_4K) #define CTB_G2H_BUFFER_SIZE (4 * CTB_H2G_BUFFER_SIZE) +#define G2H_ROOM_BUFFER_SIZE (CTB_G2H_BUFFER_SIZE / 4) struct ct_request { struct list_head link; @@ -108,6 +109,7 @@ void intel_guc_ct_init_early(struct intel_guc_ct *ct) INIT_LIST_HEAD(&ct->requests.incoming); INIT_WORK(&ct->requests.worker, ct_incoming_request_worker_func); tasklet_setup(&ct->receive_tasklet, ct_receive_tasklet_func); + init_waitqueue_head(&ct->wq); } static inline const char *guc_ct_buffer_type_to_str(u32 type) @@ -129,23 +131,27 @@ static void guc_ct_buffer_desc_init(struct guc_ct_buffer_desc *desc) static void guc_ct_buffer_reset(struct intel_guc_ct_buffer *ctb) { + u32 space; + ctb->broken = false; ctb->tail = 0; ctb->head = 0; - ctb->space = CIRC_SPACE(ctb->tail, ctb->head, ctb->size); + space = CIRC_SPACE(ctb->tail, ctb->head, ctb->size) - ctb->resv_space; + atomic_set(&ctb->space, space); guc_ct_buffer_desc_init(ctb->desc); } static void guc_ct_buffer_init(struct intel_guc_ct_buffer *ctb, struct guc_ct_buffer_desc *desc, - u32 *cmds, u32 size_in_bytes) + u32 *cmds, u32 size_in_bytes, u32 resv_space) { GEM_BUG_ON(size_in_bytes % 4); ctb->desc = desc; ctb->cmds = cmds; ctb->size = size_in_bytes / 4; + ctb->resv_space = resv_space / 4; guc_ct_buffer_reset(ctb); } @@ -226,6 +232,7 @@ int intel_guc_ct_init(struct intel_guc_ct *ct) struct guc_ct_buffer_desc *desc; u32 blob_size; u32 cmds_size; + u32 resv_space; void *blob; u32 *cmds; int err; @@ -250,19 +257,23 @@ int intel_guc_ct_init(struct intel_guc_ct *ct) desc = blob; cmds = blob + 2 * CTB_DESC_SIZE; cmds_size = CTB_H2G_BUFFER_SIZE; - CT_DEBUG(ct, "%s desc %#tx cmds %#tx size %u\n", "send", - ptrdiff(desc, blob), ptrdiff(cmds, blob), cmds_size); + resv_space = 0; + CT_DEBUG(ct, "%s desc %#tx cmds %#tx size %u/%u\n", "send", + ptrdiff(desc, blob), ptrdiff(cmds, blob), cmds_size, + resv_space); - guc_ct_buffer_init(&ct->ctbs.send, desc, cmds, cmds_size); + guc_ct_buffer_init(&ct->ctbs.send, desc, cmds, cmds_size, resv_space); /* store pointers to desc and cmds for recv ctb */ desc = blob + CTB_DESC_SIZE; cmds = blob + 2 * CTB_DESC_SIZE + CTB_H2G_BUFFER_SIZE; cmds_size = CTB_G2H_BUFFER_SIZE; - CT_DEBUG(ct, "%s desc %#tx cmds %#tx size %u\n", "recv", - ptrdiff(desc, blob), ptrdiff(cmds, blob), cmds_size); + resv_space = G2H_ROOM_BUFFER_SIZE; + CT_DEBUG(ct, "%s desc %#tx cmds %#tx 
size %u/%u\n", "recv", + ptrdiff(desc, blob), ptrdiff(cmds, blob), cmds_size, + resv_space); - guc_ct_buffer_init(&ct->ctbs.recv, desc, cmds, cmds_size); + guc_ct_buffer_init(&ct->ctbs.recv, desc, cmds, cmds_size, resv_space); return 0; } @@ -461,8 +472,8 @@ static int ct_write(struct intel_guc_ct *ct, /* update local copies */ ctb->tail = tail; - GEM_BUG_ON(ctb->space < len + GUC_CTB_HDR_LEN); - ctb->space -= len + GUC_CTB_HDR_LEN; + GEM_BUG_ON(atomic_read(&ctb->space) < len + GUC_CTB_HDR_LEN); + atomic_sub(len + GUC_CTB_HDR_LEN, &ctb->space); /* now update descriptor */ WRITE_ONCE(desc->tail, tail); @@ -537,6 +548,32 @@ static inline bool ct_deadlocked(struct intel_guc_ct *ct) return ret; } +static inline bool g2h_has_room(struct intel_guc_ct *ct, u32 g2h_len_dw) +{ + struct intel_guc_ct_buffer *ctb = &ct->ctbs.recv; + + /* + * We leave a certain amount of space in the G2H CTB buffer for + * unexpected G2H CTBs (e.g. logging, engine hang, etc...) + */ + return !g2h_len_dw || atomic_read(&ctb->space) >= g2h_len_dw; +} + +static inline void g2h_reserve_space(struct intel_guc_ct *ct, u32 g2h_len_dw) +{ + lockdep_assert_held(&ct->ctbs.send.lock); + + GEM_BUG_ON(!g2h_has_room(ct, g2h_len_dw)); + + if (g2h_len_dw) + atomic_sub(g2h_len_dw, &ct->ctbs.recv.space); +} + +static inline void g2h_release_space(struct intel_guc_ct *ct, u32 g2h_len_dw) +{ + atomic_add(g2h_len_dw, &ct->ctbs.recv.space); +} + static inline bool h2g_has_room(struct intel_guc_ct *ct, u32 len_dw) { struct intel_guc_ct_buffer *ctb = &ct->ctbs.send; @@ -544,7 +581,7 @@ static inline bool h2g_has_room(struct intel_guc_ct *ct, u32 len_dw) u32 head; u32 space; - if (ctb->space >= len_dw) + if (atomic_read(&ctb->space) >= len_dw) return true; head = READ_ONCE(desc->head); @@ -557,16 +594,16 @@ static inline bool h2g_has_room(struct intel_guc_ct *ct, u32 len_dw) } space = CIRC_SPACE(ctb->tail, head, ctb->size); - ctb->space = space; + atomic_set(&ctb->space, space); return space >= len_dw; } -static int has_room_nb(struct intel_guc_ct *ct, u32 len_dw) +static int has_room_nb(struct intel_guc_ct *ct, u32 h2g_dw, u32 g2h_dw) { lockdep_assert_held(&ct->ctbs.send.lock); - if (unlikely(!h2g_has_room(ct, len_dw))) { + if (unlikely(!h2g_has_room(ct, h2g_dw) || !g2h_has_room(ct, g2h_dw))) { if (ct->stall_time == KTIME_MAX) ct->stall_time = ktime_get(); @@ -580,6 +617,9 @@ static int has_room_nb(struct intel_guc_ct *ct, u32 len_dw) return 0; } +#define G2H_LEN_DW(f) \ + FIELD_GET(INTEL_GUC_CT_SEND_G2H_DW_MASK, f) ? \ + FIELD_GET(INTEL_GUC_CT_SEND_G2H_DW_MASK, f) + GUC_CTB_HXG_MSG_MIN_LEN : 0 static int ct_send_nb(struct intel_guc_ct *ct, const u32 *action, u32 len, @@ -587,12 +627,13 @@ static int ct_send_nb(struct intel_guc_ct *ct, { struct intel_guc_ct_buffer *ctb = &ct->ctbs.send; unsigned long spin_flags; + u32 g2h_len_dw = G2H_LEN_DW(flags); u32 fence; int ret; spin_lock_irqsave(&ctb->lock, spin_flags); - ret = has_room_nb(ct, len + GUC_CTB_HDR_LEN); + ret = has_room_nb(ct, len + GUC_CTB_HDR_LEN, g2h_len_dw); if (unlikely(ret)) goto out; @@ -601,6 +642,7 @@ static int ct_send_nb(struct intel_guc_ct *ct, if (unlikely(ret)) goto out; + g2h_reserve_space(ct, g2h_len_dw); intel_guc_notify(ct_to_guc(ct)); out: @@ -632,11 +674,13 @@ static int ct_send(struct intel_guc_ct *ct, /* * We use a lazy spin wait loop here as we believe that if the CT * buffers are sized correctly the flow control condition should be - * rare. + * rare. Reserving the maximum size in the G2H credits as we don't know + * how big the response is going to be. 
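The reserved G2H space introduced here behaves like a credit pool: a sender that expects a reply reserves the worst-case reply size before the H2G goes out, and the credit is returned once the reply has been consumed. A rough userspace sketch of that accounting with C11 atomics and made-up sizes (in the driver the check and the reservation are done under the send-side CTB lock):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_uint g2h_credits = 256;	/* receive buffer minus reserved slack */

static bool g2h_has_room(unsigned int len_dw)
{
	return !len_dw || atomic_load(&g2h_credits) >= len_dw;
}

static bool g2h_reserve(unsigned int len_dw)
{
	/* Check + reserve kept simple here; the driver holds a lock across it. */
	if (!g2h_has_room(len_dw))
		return false;		/* caller must retry or back off */
	atomic_fetch_sub(&g2h_credits, len_dw);
	return true;
}

static void g2h_release(unsigned int len_dw)
{
	atomic_fetch_add(&g2h_credits, len_dw);
}

int main(void)
{
	if (g2h_reserve(8))
		printf("sent request, credits left: %u\n", atomic_load(&g2h_credits));
	g2h_release(8);			/* reply processed */
	printf("credits restored: %u\n", atomic_load(&g2h_credits));
	return 0;
}
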
*/ retry: spin_lock_irqsave(&ctb->lock, flags); - if (unlikely(!h2g_has_room(ct, len + GUC_CTB_HDR_LEN))) { + if (unlikely(!h2g_has_room(ct, len + GUC_CTB_HDR_LEN) || + !g2h_has_room(ct, GUC_CTB_HXG_MSG_MAX_LEN))) { if (ct->stall_time == KTIME_MAX) ct->stall_time = ktime_get(); spin_unlock_irqrestore(&ctb->lock, flags); @@ -664,6 +708,7 @@ static int ct_send(struct intel_guc_ct *ct, spin_unlock(&ct->requests.lock); err = ct_write(ct, action, len, fence, 0); + g2h_reserve_space(ct, GUC_CTB_HXG_MSG_MAX_LEN); spin_unlock_irqrestore(&ctb->lock, flags); @@ -673,6 +718,7 @@ static int ct_send(struct intel_guc_ct *ct, intel_guc_notify(ct_to_guc(ct)); err = wait_for_ct_request_update(&request, status); + g2h_release_space(ct, GUC_CTB_HXG_MSG_MAX_LEN); if (unlikely(err)) goto unlink; @@ -711,7 +757,10 @@ int intel_guc_ct_send(struct intel_guc_ct *ct, const u32 *action, u32 len, int ret; if (unlikely(!ct->enabled)) { - WARN(1, "Unexpected send: action=%#x\n", *action); + struct intel_guc *guc = ct_to_guc(ct); + struct intel_uc *uc = container_of(guc, struct intel_uc, guc); + + WARN(!uc->reset_in_progress, "Unexpected send: action=%#x\n", *action); return -ENODEV; } @@ -928,6 +977,19 @@ static int ct_process_request(struct intel_guc_ct *ct, struct ct_incoming_msg *r case INTEL_GUC_ACTION_DEFAULT: ret = intel_guc_to_host_process_recv_msg(guc, payload, len); break; + case INTEL_GUC_ACTION_DEREGISTER_CONTEXT_DONE: + ret = intel_guc_deregister_done_process_msg(guc, payload, + len); + break; + case INTEL_GUC_ACTION_SCHED_CONTEXT_MODE_DONE: + ret = intel_guc_sched_done_process_msg(guc, payload, len); + break; + case INTEL_GUC_ACTION_CONTEXT_RESET_NOTIFICATION: + ret = intel_guc_context_reset_process_msg(guc, payload, len); + break; + case INTEL_GUC_ACTION_ENGINE_FAILURE_NOTIFICATION: + ret = intel_guc_engine_failure_process_msg(guc, payload, len); + break; default: ret = -EOPNOTSUPP; break; @@ -985,10 +1047,22 @@ static void ct_incoming_request_worker_func(struct work_struct *w) static int ct_handle_event(struct intel_guc_ct *ct, struct ct_incoming_msg *request) { const u32 *hxg = &request->msg[GUC_CTB_MSG_MIN_LEN]; + u32 action = FIELD_GET(GUC_HXG_EVENT_MSG_0_ACTION, hxg[0]); unsigned long flags; GEM_BUG_ON(FIELD_GET(GUC_HXG_MSG_0_TYPE, hxg[0]) != GUC_HXG_TYPE_EVENT); + /* + * Adjusting the space must be done in IRQ or deadlock can occur as the + * CTB processing in the below workqueue can send CTBs which creates a + * circular dependency if the space was returned there. 
+ */ + switch (action) { + case INTEL_GUC_ACTION_SCHED_CONTEXT_MODE_DONE: + case INTEL_GUC_ACTION_DEREGISTER_CONTEXT_DONE: + g2h_release_space(ct, request->size); + } + spin_lock_irqsave(&ct->requests.lock, flags); list_add_tail(&request->link, &ct->requests.incoming); spin_unlock_irqrestore(&ct->requests.lock, flags); @@ -1106,3 +1180,25 @@ void intel_guc_ct_event_handler(struct intel_guc_ct *ct) ct_try_receive_message(ct); } + +void intel_guc_ct_print_info(struct intel_guc_ct *ct, + struct drm_printer *p) +{ + drm_printf(p, "CT %s\n", enableddisabled(ct->enabled)); + + if (!ct->enabled) + return; + + drm_printf(p, "H2G Space: %u\n", + atomic_read(&ct->ctbs.send.space) * 4); + drm_printf(p, "Head: %u\n", + ct->ctbs.send.desc->head); + drm_printf(p, "Tail: %u\n", + ct->ctbs.send.desc->tail); + drm_printf(p, "G2H Space: %u\n", + atomic_read(&ct->ctbs.recv.space) * 4); + drm_printf(p, "Head: %u\n", + ct->ctbs.recv.desc->head); + drm_printf(p, "Tail: %u\n", + ct->ctbs.recv.desc->tail); +} diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h index edd1bba0445d..7b34026d264a 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h @@ -10,11 +10,13 @@ #include #include #include +#include #include "intel_guc_fwif.h" struct i915_vma; struct intel_guc; +struct drm_printer; /** * DOC: Command Transport (CT). @@ -33,6 +35,7 @@ struct intel_guc; * @desc: pointer to the buffer descriptor * @cmds: pointer to the commands buffer * @size: size of the commands buffer in dwords + * @resv_space: reserved space in buffer in dwords * @head: local shadow copy of head in dwords * @tail: local shadow copy of tail in dwords * @space: local shadow copy of space in dwords @@ -43,9 +46,10 @@ struct intel_guc_ct_buffer { struct guc_ct_buffer_desc *desc; u32 *cmds; u32 size; + u32 resv_space; u32 tail; u32 head; - u32 space; + atomic_t space; bool broken; }; @@ -66,6 +70,9 @@ struct intel_guc_ct { struct tasklet_struct receive_tasklet; + /** @wq: wait queue for g2h chanenl */ + wait_queue_head_t wq; + struct { u16 last_fence; /* last fence used to send request */ @@ -97,8 +104,15 @@ static inline bool intel_guc_ct_enabled(struct intel_guc_ct *ct) } #define INTEL_GUC_CT_SEND_NB BIT(31) +#define INTEL_GUC_CT_SEND_G2H_DW_SHIFT 0 +#define INTEL_GUC_CT_SEND_G2H_DW_MASK (0xff << INTEL_GUC_CT_SEND_G2H_DW_SHIFT) +#define MAKE_SEND_FLAGS(len) \ + ({GEM_BUG_ON(!FIELD_FIT(INTEL_GUC_CT_SEND_G2H_DW_MASK, len)); \ + (FIELD_PREP(INTEL_GUC_CT_SEND_G2H_DW_MASK, len) | INTEL_GUC_CT_SEND_NB);}) int intel_guc_ct_send(struct intel_guc_ct *ct, const u32 *action, u32 len, u32 *response_buf, u32 response_buf_size, u32 flags); void intel_guc_ct_event_handler(struct intel_guc_ct *ct); +void intel_guc_ct_print_info(struct intel_guc_ct *ct, struct drm_printer *p); + #endif /* _INTEL_GUC_CT_H_ */ diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c index fe7cb7b29a1e..72ddfff42f7d 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c @@ -9,6 +9,9 @@ #include "intel_guc.h" #include "intel_guc_debugfs.h" #include "intel_guc_log_debugfs.h" +#include "gt/uc/intel_guc_ct.h" +#include "gt/uc/intel_guc_ads.h" +#include "gt/uc/intel_guc_submission.h" static int guc_info_show(struct seq_file *m, void *data) { @@ -22,16 +25,36 @@ static int guc_info_show(struct seq_file *m, void *data) drm_puts(&p, "\n"); intel_guc_log_info(&guc->log, &p); - /* Add more 
as required ... */ + if (!intel_guc_submission_is_used(guc)) + return 0; + + intel_guc_ct_print_info(&guc->ct, &p); + intel_guc_submission_print_info(guc, &p); + intel_guc_ads_print_policy_info(guc, &p); return 0; } DEFINE_GT_DEBUGFS_ATTRIBUTE(guc_info); +static int guc_registered_contexts_show(struct seq_file *m, void *data) +{ + struct intel_guc *guc = m->private; + struct drm_printer p = drm_seq_file_printer(m); + + if (!intel_guc_submission_is_used(guc)) + return -ENODEV; + + intel_guc_submission_print_context_info(guc, &p); + + return 0; +} +DEFINE_GT_DEBUGFS_ATTRIBUTE(guc_registered_contexts); + void intel_guc_debugfs_register(struct intel_guc *guc, struct dentry *root) { static const struct debugfs_gt_file files[] = { { "guc_info", &guc_info_fops, NULL }, + { "guc_registered_contexts", &guc_registered_contexts_fops, NULL }, }; if (!intel_guc_is_supported(guc)) diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h index 617ec601648d..94bb1ca6f889 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h @@ -17,14 +17,21 @@ #include "abi/guc_communication_ctb_abi.h" #include "abi/guc_messages_abi.h" +/* Payload length only i.e. don't include G2H header length */ +#define G2H_LEN_DW_SCHED_CONTEXT_MODE_SET 2 +#define G2H_LEN_DW_DEREGISTER_CONTEXT 1 + +#define GUC_CONTEXT_DISABLE 0 +#define GUC_CONTEXT_ENABLE 1 + #define GUC_CLIENT_PRIORITY_KMD_HIGH 0 #define GUC_CLIENT_PRIORITY_HIGH 1 #define GUC_CLIENT_PRIORITY_KMD_NORMAL 2 #define GUC_CLIENT_PRIORITY_NORMAL 3 #define GUC_CLIENT_PRIORITY_NUM 4 -#define GUC_MAX_STAGE_DESCRIPTORS 1024 -#define GUC_INVALID_STAGE_ID GUC_MAX_STAGE_DESCRIPTORS +#define GUC_MAX_LRC_DESCRIPTORS 65535 +#define GUC_INVALID_LRC_ID GUC_MAX_LRC_DESCRIPTORS #define GUC_RENDER_ENGINE 0 #define GUC_VIDEO_ENGINE 1 @@ -175,66 +182,39 @@ struct guc_process_desc { u32 reserved[30]; } __packed; -/* engine id and context id is packed into guc_execlist_context.context_id*/ -#define GUC_ELC_CTXID_OFFSET 0 -#define GUC_ELC_ENGINE_OFFSET 29 +#define CONTEXT_REGISTRATION_FLAG_KMD BIT(0) -/* The execlist context including software and HW information */ -struct guc_execlist_context { - u32 context_desc; - u32 context_id; - u32 ring_status; - u32 ring_lrca; - u32 ring_begin; - u32 ring_end; - u32 ring_next_free_location; - u32 ring_current_tail_pointer_value; - u8 engine_state_submit_value; - u8 engine_state_wait_value; - u16 pagefault_count; - u16 engine_submit_queue_count; -} __packed; +#define CONTEXT_POLICY_DEFAULT_EXECUTION_QUANTUM_US 1000000 +#define CONTEXT_POLICY_DEFAULT_PREEMPTION_TIME_US 500000 + +/* Preempt to idle on quantum expiry */ +#define CONTEXT_POLICY_FLAG_PREEMPT_TO_IDLE BIT(0) /* - * This structure describes a stage set arranged for a particular communication - * between uKernel (GuC) and Driver (KMD). Technically, this is known as a - * "GuC Context descriptor" in the specs, but we use the term "stage descriptor" - * to avoid confusion with all the other things already named "context" in the - * driver. A static pool of these descriptors are stored inside a GEM object - * (stage_desc_pool) which is held for the entire lifetime of our interaction - * with the GuC, being allocated before the GuC is loaded with its firmware. + * GuC Context registration descriptor. + * FIXME: This is only required to exist during context registration. + * The current 1:1 between guc_lrc_desc and LRCs for the lifetime of the LRC + * is not required. 
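Since the new guc_lrc_desc below is a packed, firmware-visible structure, its field offsets and total size are effectively ABI. One illustrative way to pin such a layout down at compile time, using an invented demo structure rather than the real descriptor:

#include <stdint.h>
#include <stddef.h>
#include <assert.h>
#include <stdio.h>

/* The fields and offsets here are invented purely for the example. */
struct demo_desc {
	uint32_t hw_context_desc;
	uint32_t engine_submit_mask;
	uint8_t  engine_class;
	uint8_t  reserved0[3];
	uint32_t priority;
	uint32_t wq_addr;
	uint32_t wq_size;
	uint32_t context_flags;
} __attribute__((packed));

static_assert(offsetof(struct demo_desc, priority) == 12,
	      "priority must sit right after the class byte + padding");
static_assert(sizeof(struct demo_desc) == 28,
	      "descriptor size is part of the ABI");

int main(void)
{
	printf("demo descriptor: %zu bytes\n", sizeof(struct demo_desc));
	return 0;
}
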
*/ -struct guc_stage_desc { - u32 sched_common_area; - u32 stage_id; - u32 pas_id; - u8 engines_used; - u64 db_trigger_cpu; - u32 db_trigger_uk; - u64 db_trigger_phy; - u16 db_id; - - struct guc_execlist_context lrc[GUC_MAX_ENGINES_NUM]; - - u8 attribute; - +struct guc_lrc_desc { + u32 hw_context_desc; + u32 slpm_perf_mode_hint; /* SPLC v1 only */ + u32 slpm_freq_hint; + u32 engine_submit_mask; /* In logical space */ + u8 engine_class; + u8 reserved0[3]; u32 priority; - - u32 wq_sampled_tail_offset; - u32 wq_total_submit_enqueues; - u32 process_desc; u32 wq_addr; u32 wq_size; - - u32 engine_presence; - - u8 engine_suspended; - - u8 reserved0[3]; - u64 reserved1[1]; - - u64 desc_private; + u32 context_flags; /* CONTEXT_REGISTRATION_* */ + /* Time for one workload to execute. (in micro seconds) */ + u32 execution_quantum; + /* Time to wait for a preemption request to complete before issuing a + * reset. (in micro seconds). */ + u32 preemption_timeout; + u32 policy_flags; /* CONTEXT_POLICY_* */ + u32 reserved1[19]; } __packed; #define GUC_POWER_UNSPECIFIED 0 diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index e9c237b18692..259f79dfe7bb 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -10,10 +10,13 @@ #include "gt/intel_breadcrumbs.h" #include "gt/intel_context.h" #include "gt/intel_engine_pm.h" +#include "gt/intel_engine_heartbeat.h" #include "gt/intel_gt.h" #include "gt/intel_gt_irq.h" #include "gt/intel_gt_pm.h" +#include "gt/intel_gt_requests.h" #include "gt/intel_lrc.h" +#include "gt/intel_lrc_reg.h" #include "gt/intel_mocs.h" #include "gt/intel_ring.h" @@ -58,246 +61,697 @@ * */ +/* GuC Virtual Engine */ +struct guc_virtual_engine { + struct intel_engine_cs base; + struct intel_context context; +}; + +static struct intel_context * +guc_create_virtual(struct intel_engine_cs **siblings, unsigned int count); + #define GUC_REQUEST_SIZE 64 /* bytes */ -static inline struct i915_priolist *to_priolist(struct rb_node *rb) +/* + * Below is a set of functions which control the GuC scheduling state which do + * not require a lock as all state transitions are mutually exclusive. i.e. It + * is not possible for the context pinning code and submission, for the same + * context, to be executing simultaneously. We still need an atomic as it is + * possible for some of the bits to changing at the same time though. 
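The guc_sched_state_no_lock word manipulated by the helpers below packs single-bit flags in the low bits and a blocked count above them, all updated with atomic OR/AND/ADD so no lock is required. A self-contained sketch of that encoding with invented flag names:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define STATE_ENABLED		(1u << 0)
#define STATE_PENDING_ENABLE	(1u << 1)
#define STATE_BLOCKED_SHIFT	3
#define STATE_BLOCKED_ONE	(1u << STATE_BLOCKED_SHIFT)

static atomic_uint state;

static void set_enabled(void)  { atomic_fetch_or(&state, STATE_ENABLED); }
static void clr_enabled(void)  { atomic_fetch_and(&state, ~STATE_ENABLED); }
static bool is_enabled(void)   { return atomic_load(&state) & STATE_ENABLED; }

/* The blocked "flag" is really a counter living above the single bits. */
static void incr_blocked(void) { atomic_fetch_add(&state, STATE_BLOCKED_ONE); }
static void decr_blocked(void) { atomic_fetch_sub(&state, STATE_BLOCKED_ONE); }
static unsigned int blocked(void)
{
	return atomic_load(&state) >> STATE_BLOCKED_SHIFT;
}

int main(void)
{
	set_enabled();
	incr_blocked();
	incr_blocked();
	printf("enabled=%d blocked=%u\n", is_enabled(), blocked());
	decr_blocked();
	clr_enabled();
	printf("enabled=%d blocked=%u\n", is_enabled(), blocked());
	return 0;
}
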
+ */ +#define SCHED_STATE_NO_LOCK_ENABLED BIT(0) +#define SCHED_STATE_NO_LOCK_PENDING_ENABLE BIT(1) +#define SCHED_STATE_NO_LOCK_REGISTERED BIT(2) +#define SCHED_STATE_NO_LOCK_BLOCKED_SHIFT 3 +#define SCHED_STATE_NO_LOCK_BLOCKED \ + BIT(SCHED_STATE_NO_LOCK_BLOCKED_SHIFT) +#define SCHED_STATE_NO_LOCK_BLOCKED_MASK \ + (0xffff << SCHED_STATE_NO_LOCK_BLOCKED_SHIFT) +static inline bool context_enabled(struct intel_context *ce) { - return rb_entry(rb, struct i915_priolist, node); + return (atomic_read(&ce->guc_sched_state_no_lock) & + SCHED_STATE_NO_LOCK_ENABLED); +} + +static inline void set_context_enabled(struct intel_context *ce) +{ + atomic_or(SCHED_STATE_NO_LOCK_ENABLED, &ce->guc_sched_state_no_lock); +} + +static inline void clr_context_enabled(struct intel_context *ce) +{ + atomic_and((u32)~SCHED_STATE_NO_LOCK_ENABLED, + &ce->guc_sched_state_no_lock); +} + +static inline bool context_pending_enable(struct intel_context *ce) +{ + return (atomic_read(&ce->guc_sched_state_no_lock) & + SCHED_STATE_NO_LOCK_PENDING_ENABLE); } -static struct guc_stage_desc *__get_stage_desc(struct intel_guc *guc, u32 id) +static inline void set_context_pending_enable(struct intel_context *ce) { - struct guc_stage_desc *base = guc->stage_desc_pool_vaddr; + atomic_or(SCHED_STATE_NO_LOCK_PENDING_ENABLE, + &ce->guc_sched_state_no_lock); +} + +static inline void clr_context_pending_enable(struct intel_context *ce) +{ + atomic_and((u32)~SCHED_STATE_NO_LOCK_PENDING_ENABLE, + &ce->guc_sched_state_no_lock); +} + +static inline u32 context_blocked(struct intel_context *ce) +{ + return (atomic_read(&ce->guc_sched_state_no_lock) & + SCHED_STATE_NO_LOCK_BLOCKED_MASK) >> + SCHED_STATE_NO_LOCK_BLOCKED_SHIFT; +} + +static inline void incr_context_blocked(struct intel_context *ce) +{ + lockdep_assert_held(&ce->engine->sched_engine->lock); + atomic_add(SCHED_STATE_NO_LOCK_BLOCKED, + &ce->guc_sched_state_no_lock); +} - return &base[id]; +static inline void decr_context_blocked(struct intel_context *ce) +{ + lockdep_assert_held(&ce->engine->sched_engine->lock); + atomic_sub(SCHED_STATE_NO_LOCK_BLOCKED, + &ce->guc_sched_state_no_lock); } -static int guc_stage_desc_pool_create(struct intel_guc *guc) +static inline bool context_registered(struct intel_context *ce) { - u32 size = PAGE_ALIGN(sizeof(struct guc_stage_desc) * - GUC_MAX_STAGE_DESCRIPTORS); + return (atomic_read(&ce->guc_sched_state_no_lock) & + SCHED_STATE_NO_LOCK_REGISTERED); +} - return intel_guc_allocate_and_map_vma(guc, size, &guc->stage_desc_pool, - &guc->stage_desc_pool_vaddr); +static inline void set_context_registered(struct intel_context *ce) +{ + atomic_or(SCHED_STATE_NO_LOCK_REGISTERED, + &ce->guc_sched_state_no_lock); } -static void guc_stage_desc_pool_destroy(struct intel_guc *guc) +static inline void clr_context_registered(struct intel_context *ce) { - i915_vma_unpin_and_release(&guc->stage_desc_pool, I915_VMA_RELEASE_MAP); + atomic_and((u32)~SCHED_STATE_NO_LOCK_REGISTERED, + &ce->guc_sched_state_no_lock); } /* - * Initialise/clear the stage descriptor shared with the GuC firmware. - * - * This descriptor tells the GuC where (in GGTT space) to find the important - * data structures related to work submission (process descriptor, write queue, - * etc). + * Below is a set of functions which control the GuC scheduling state which + * require a lock, aside from the special case where the functions are called + * from guc_lrc_desc_pin(). In that case it isn't possible for any other code + * path to be executing on the context. 
*/ -static void guc_stage_desc_init(struct intel_guc *guc) +#define SCHED_STATE_WAIT_FOR_DEREGISTER_TO_REGISTER BIT(0) +#define SCHED_STATE_DESTROYED BIT(1) +#define SCHED_STATE_PENDING_DISABLE BIT(2) +#define SCHED_STATE_BANNED BIT(3) +static inline void init_sched_state(struct intel_context *ce) +{ + /* Only should be called from guc_lrc_desc_pin() */ + atomic_set(&ce->guc_sched_state_no_lock, 0); + ce->guc_state.sched_state = 0; +} + +static inline bool +context_wait_for_deregister_to_register(struct intel_context *ce) { - struct guc_stage_desc *desc; + return ce->guc_state.sched_state & + SCHED_STATE_WAIT_FOR_DEREGISTER_TO_REGISTER; +} - /* we only use 1 stage desc, so hardcode it to 0 */ - desc = __get_stage_desc(guc, 0); - memset(desc, 0, sizeof(*desc)); +static inline void +set_context_wait_for_deregister_to_register(struct intel_context *ce) +{ + /* Only should be called from guc_lrc_desc_pin() without lock */ + ce->guc_state.sched_state |= + SCHED_STATE_WAIT_FOR_DEREGISTER_TO_REGISTER; +} - desc->attribute = GUC_STAGE_DESC_ATTR_ACTIVE | - GUC_STAGE_DESC_ATTR_KERNEL; +static inline void +clr_context_wait_for_deregister_to_register(struct intel_context *ce) +{ + lockdep_assert_held(&ce->guc_state.lock); + ce->guc_state.sched_state &= + ~SCHED_STATE_WAIT_FOR_DEREGISTER_TO_REGISTER; +} - desc->stage_id = 0; - desc->priority = GUC_CLIENT_PRIORITY_KMD_NORMAL; +static inline bool +context_destroyed(struct intel_context *ce) +{ + return ce->guc_state.sched_state & SCHED_STATE_DESTROYED; +} - desc->wq_size = GUC_WQ_SIZE; +static inline void +set_context_destroyed(struct intel_context *ce) +{ + lockdep_assert_held(&ce->guc_state.lock); + ce->guc_state.sched_state |= SCHED_STATE_DESTROYED; } -static void guc_stage_desc_fini(struct intel_guc *guc) +static inline bool context_pending_disable(struct intel_context *ce) { - struct guc_stage_desc *desc; + return ce->guc_state.sched_state & SCHED_STATE_PENDING_DISABLE; +} - desc = __get_stage_desc(guc, 0); - memset(desc, 0, sizeof(*desc)); +static inline void set_context_pending_disable(struct intel_context *ce) +{ + lockdep_assert_held(&ce->guc_state.lock); + ce->guc_state.sched_state |= SCHED_STATE_PENDING_DISABLE; } -static void guc_add_request(struct intel_guc *guc, struct i915_request *rq) +static inline void clr_context_pending_disable(struct intel_context *ce) { - /* Leaving stub as this function will be used in future patches */ + lockdep_assert_held(&ce->guc_state.lock); + ce->guc_state.sched_state &= ~SCHED_STATE_PENDING_DISABLE; } -/* - * When we're doing submissions using regular execlists backend, writing to - * ELSP from CPU side is enough to make sure that writes to ringbuffer pages - * pinned in mappable aperture portion of GGTT are visible to command streamer. - * Writes done by GuC on our behalf are not guaranteeing such ordering, - * therefore, to ensure the flush, we're issuing a POSTING READ. 
- */ -static void flush_ggtt_writes(struct i915_vma *vma) +static inline bool context_banned(struct intel_context *ce) { - if (i915_vma_is_map_and_fenceable(vma)) - intel_uncore_posting_read_fw(vma->vm->gt->uncore, - GUC_STATUS); + return ce->guc_state.sched_state & SCHED_STATE_BANNED; } -static void guc_submit(struct intel_engine_cs *engine, - struct i915_request **out, - struct i915_request **end) +static inline void set_context_banned(struct intel_context *ce) { - struct intel_guc *guc = &engine->gt->uc.guc; + lockdep_assert_held(&ce->guc_state.lock); + ce->guc_state.sched_state |= SCHED_STATE_BANNED; +} - do { - struct i915_request *rq = *out++; +static inline void clr_context_banned(struct intel_context *ce) +{ + lockdep_assert_held(&ce->guc_state.lock); + ce->guc_state.sched_state &= ~SCHED_STATE_BANNED; +} - flush_ggtt_writes(rq->ring->vma); - guc_add_request(guc, rq); - } while (out != end); +static inline bool context_guc_id_invalid(struct intel_context *ce) +{ + return (ce->guc_id == GUC_INVALID_LRC_ID); } -static inline int rq_prio(const struct i915_request *rq) +static inline void set_context_guc_id_invalid(struct intel_context *ce) { - return rq->sched.attr.priority; + ce->guc_id = GUC_INVALID_LRC_ID; +} + +static inline struct intel_guc *ce_to_guc(struct intel_context *ce) +{ + return &ce->engine->gt->uc.guc; +} + +static inline struct i915_priolist *to_priolist(struct rb_node *rb) +{ + return rb_entry(rb, struct i915_priolist, node); +} + +static struct guc_lrc_desc *__get_lrc_desc(struct intel_guc *guc, u32 index) +{ + struct guc_lrc_desc *base = guc->lrc_desc_pool_vaddr; + + GEM_BUG_ON(index >= GUC_MAX_LRC_DESCRIPTORS); + + return &base[index]; +} + +static inline struct intel_context *__get_context(struct intel_guc *guc, u32 id) +{ + struct intel_context *ce = xa_load(&guc->context_lookup, id); + + GEM_BUG_ON(id >= GUC_MAX_LRC_DESCRIPTORS); + + return ce; +} + +static int guc_lrc_desc_pool_create(struct intel_guc *guc) +{ + u32 size; + int ret; + + size = PAGE_ALIGN(sizeof(struct guc_lrc_desc) * + GUC_MAX_LRC_DESCRIPTORS); + ret = intel_guc_allocate_and_map_vma(guc, size, &guc->lrc_desc_pool, + (void **)&guc->lrc_desc_pool_vaddr); + if (ret) + return ret; + + return 0; } -static struct i915_request *schedule_in(struct i915_request *rq, int idx) +static void guc_lrc_desc_pool_destroy(struct intel_guc *guc) { - trace_i915_request_in(rq, idx); + guc->lrc_desc_pool_vaddr = NULL; + i915_vma_unpin_and_release(&guc->lrc_desc_pool, I915_VMA_RELEASE_MAP); +} + +static inline bool guc_submission_initialized(struct intel_guc *guc) +{ + return guc->lrc_desc_pool_vaddr != NULL; +} + +static inline void reset_lrc_desc(struct intel_guc *guc, u32 id) +{ + if (likely(guc_submission_initialized(guc))) { + struct guc_lrc_desc *desc = __get_lrc_desc(guc, id); + unsigned long flags; + + memset(desc, 0, sizeof(*desc)); + + /* + * xarray API doesn't have xa_erase_irqsave wrapper, so calling + * the lower level functions directly. + */ + xa_lock_irqsave(&guc->context_lookup, flags); + __xa_erase(&guc->context_lookup, id); + xa_unlock_irqrestore(&guc->context_lookup, flags); + } +} + +static inline bool lrc_desc_registered(struct intel_guc *guc, u32 id) +{ + return __get_context(guc, id); +} + +static inline void set_lrc_desc_registered(struct intel_guc *guc, u32 id, + struct intel_context *ce) +{ + unsigned long flags; /* - * Currently we are not tracking the rq->context being inflight - * (ce->inflight = rq->engine). 
It is only used by the execlists - * backend at the moment, a similar counting strategy would be - * required if we generalise the inflight tracking. + * xarray API doesn't have xa_save_irqsave wrapper, so calling the + * lower level functions directly. */ + xa_lock_irqsave(&guc->context_lookup, flags); + __xa_store(&guc->context_lookup, id, ce, GFP_ATOMIC); + xa_unlock_irqrestore(&guc->context_lookup, flags); +} + +static int guc_submission_send_busy_loop(struct intel_guc* guc, + const u32 *action, + u32 len, + u32 g2h_len_dw, + bool loop) +{ + int err; + + err = intel_guc_send_busy_loop(guc, action, len, g2h_len_dw, loop); - __intel_gt_pm_get(rq->engine->gt); - return i915_request_get(rq); + if (!err && g2h_len_dw) + atomic_inc(&guc->outstanding_submission_g2h); + + return err; } -static void schedule_out(struct i915_request *rq) +int intel_guc_wait_for_pending_msg(struct intel_guc *guc, + atomic_t *wait_var, + bool interruptible, + long timeout) { - trace_i915_request_out(rq); + const int state = interruptible ? + TASK_INTERRUPTIBLE : TASK_UNINTERRUPTIBLE; + DEFINE_WAIT(wait); + + might_sleep(); + GEM_BUG_ON(timeout < 0); + + if (!atomic_read(wait_var)) + return 0; + + if (!timeout) + return -ETIME; + + for (;;) { + prepare_to_wait(&guc->ct.wq, &wait, state); + + if (!atomic_read(wait_var)) + break; + + if (signal_pending_state(state, current)) { + timeout = -EINTR; + break; + } + + if (!timeout) { + timeout = -ETIME; + break; + } - intel_gt_pm_put_async(rq->engine->gt); - i915_request_put(rq); + timeout = io_schedule_timeout(timeout); + } + finish_wait(&guc->ct.wq, &wait); + + return (timeout < 0) ? timeout : 0; } -static void __guc_dequeue(struct intel_engine_cs *engine) +int intel_guc_wait_for_idle(struct intel_guc *guc, long timeout) { - struct intel_engine_execlists * const execlists = &engine->execlists; - struct i915_sched_engine * const sched_engine = engine->sched_engine; - struct i915_request **first = execlists->inflight; - struct i915_request ** const last_port = first + execlists->port_mask; - struct i915_request *last = first[0]; - struct i915_request **port; - bool submit = false; - struct rb_node *rb; + if (!intel_uc_uses_guc_submission(&guc_to_gt(guc)->uc)) + return 0; - lockdep_assert_held(&sched_engine->lock); + return intel_guc_wait_for_pending_msg(guc, + &guc->outstanding_submission_g2h, + true, timeout); +} - if (last) { - if (*++first) - return; +static int guc_lrc_desc_pin(struct intel_context *ce, bool loop); - last = NULL; +static int guc_add_request(struct intel_guc *guc, struct i915_request *rq) +{ + int err = 0; + struct intel_context *ce = rq->context; + u32 action[3]; + int len = 0; + u32 g2h_len_dw = 0; + bool enabled; + + /* + * Corner case where requests were sitting in the priority list or a + * request resubmitted after the context was banned. + */ + if (unlikely(intel_context_is_banned(ce))) { + i915_request_put(i915_request_mark_eio(rq)); + intel_engine_signal_breadcrumbs(ce->engine); + goto out; } + GEM_BUG_ON(!atomic_read(&ce->guc_id_ref)); + GEM_BUG_ON(context_guc_id_invalid(ce)); + /* - * We write directly into the execlists->inflight queue and don't use - * the execlists->pending queue, as we don't have a distinct switch - * event. + * Corner case where the GuC firmware was blown away and reloaded while + * this context was pinned. 
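guc_add_request() below builds either a three-dword SCHED_CONTEXT_MODE_SET (first submission of a context, expects a G2H reply) or a cheaper two-dword SCHED_CONTEXT once the context is already enabled. A toy sketch of that conditional message construction; the action numbers and reply length here are invented:

#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

#define ACTION_SCHED_CONTEXT		0x1001	/* made-up action ids */
#define ACTION_SCHED_CONTEXT_MODE_SET	0x1002
#define CONTEXT_ENABLE			1
#define REPLY_LEN_MODE_SET		2	/* reply dwords to reserve */

static unsigned int build_submit(uint32_t *action, uint32_t guc_id,
				 bool enabled, unsigned int *reply_dw)
{
	unsigned int len = 0;

	*reply_dw = 0;
	if (!enabled) {
		action[len++] = ACTION_SCHED_CONTEXT_MODE_SET;
		action[len++] = guc_id;
		action[len++] = CONTEXT_ENABLE;
		*reply_dw = REPLY_LEN_MODE_SET;
	} else {
		action[len++] = ACTION_SCHED_CONTEXT;
		action[len++] = guc_id;
	}
	return len;
}

int main(void)
{
	uint32_t action[3];
	unsigned int reply_dw;
	unsigned int len = build_submit(action, 42, false, &reply_dw);

	printf("first submit: %u dwords, reserve %u reply dwords\n", len, reply_dw);
	len = build_submit(action, 42, true, &reply_dw);
	printf("later submit: %u dwords, reserve %u reply dwords\n", len, reply_dw);
	return 0;
}
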
*/ - port = first; + if (unlikely(!lrc_desc_registered(guc, ce->guc_id))) { + err = guc_lrc_desc_pin(ce, false); + if (unlikely(err)) + goto out; + } + + if (unlikely(context_blocked(ce))) + goto out; + + enabled = context_enabled(ce); + + if (!enabled) { + action[len++] = INTEL_GUC_ACTION_SCHED_CONTEXT_MODE_SET; + action[len++] = ce->guc_id; + action[len++] = GUC_CONTEXT_ENABLE; + set_context_pending_enable(ce); + intel_context_get(ce); + g2h_len_dw = G2H_LEN_DW_SCHED_CONTEXT_MODE_SET; + } else { + action[len++] = INTEL_GUC_ACTION_SCHED_CONTEXT; + action[len++] = ce->guc_id; + } + + err = intel_guc_send_nb(guc, action, len, g2h_len_dw); + if (!enabled && !err) { + trace_intel_context_sched_enable(ce); + atomic_inc(&guc->outstanding_submission_g2h); + set_context_enabled(ce); + } else if (!enabled) { + clr_context_pending_enable(ce); + intel_context_put(ce); + } + if (likely(!err)) + trace_i915_request_guc_submit(rq); + +out: + return err; +} + +static inline void guc_set_lrc_tail(struct i915_request *rq) +{ + rq->context->lrc_reg_state[CTX_RING_TAIL] = + intel_ring_set_tail(rq->ring, rq->tail); +} + +static inline int rq_prio(const struct i915_request *rq) +{ + return rq->sched.attr.priority; +} + +static int guc_dequeue_one_context(struct intel_guc *guc) +{ + struct i915_sched_engine * const sched_engine = guc->sched_engine; + struct i915_request *last = NULL; + bool submit = false; + struct rb_node *rb; + int ret; + + lockdep_assert_held(&sched_engine->lock); + + if (guc->stalled_request) { + submit = true; + last = guc->stalled_request; + goto resubmit; + } + while ((rb = rb_first_cached(&sched_engine->queue))) { struct i915_priolist *p = to_priolist(rb); struct i915_request *rq, *rn; priolist_for_each_request_consume(rq, rn, p) { - if (last && rq->context != last->context) { - if (port == last_port) - goto done; - - *port = schedule_in(last, - port - execlists->inflight); - port++; - } + if (last && rq->context != last->context) + goto done; list_del_init(&rq->sched.link); + __i915_request_submit(rq); - submit = true; + + trace_i915_request_in(rq, 0); last = rq; + submit = true; } rb_erase_cached(&p->node, &sched_engine->queue); i915_priolist_free(p); } done: - sched_engine->queue_priority_hint = - rb ? 
to_priolist(rb)->priority : INT_MIN; if (submit) { - *port = schedule_in(last, port - execlists->inflight); - *++port = NULL; - guc_submit(engine, first, port); + guc_set_lrc_tail(last); +resubmit: + ret = guc_add_request(guc, last); + if (unlikely(ret == -EPIPE)) + goto deadlk; + else if (ret == -EBUSY) { + tasklet_schedule(&sched_engine->tasklet); + guc->stalled_request = last; + return false; + } } - execlists->active = execlists->inflight; + + guc->stalled_request = NULL; + return submit; + +deadlk: + sched_engine->tasklet.callback = NULL; + tasklet_disable_nosync(&sched_engine->tasklet); + return false; } static void guc_submission_tasklet(struct tasklet_struct *t) { struct i915_sched_engine *sched_engine = from_tasklet(sched_engine, t, tasklet); - struct intel_engine_cs * const engine = sched_engine->private_data; - struct intel_engine_execlists * const execlists = &engine->execlists; - struct i915_request **port, *rq; unsigned long flags; + bool loop; + + spin_lock_irqsave(&sched_engine->lock, flags); - spin_lock_irqsave(&engine->sched_engine->lock, flags); + do { + loop = guc_dequeue_one_context(sched_engine->private_data); + } while (loop); - for (port = execlists->inflight; (rq = *port); port++) { - if (!i915_request_completed(rq)) - break; + i915_sched_engine_reset_on_empty(sched_engine); - schedule_out(rq); - } - if (port != execlists->inflight) { - int idx = port - execlists->inflight; - int rem = ARRAY_SIZE(execlists->inflight) - idx; - memmove(execlists->inflight, port, rem * sizeof(*port)); + spin_unlock_irqrestore(&sched_engine->lock, flags); +} + +static void cs_irq_handler(struct intel_engine_cs *engine, u16 iir) +{ + if (iir & GT_RENDER_USER_INTERRUPT) + intel_engine_signal_breadcrumbs(engine); +} + +static void __guc_context_destroy(struct intel_context *ce); +static void release_guc_id(struct intel_guc *guc, struct intel_context *ce); +static void guc_signal_context_fence(struct intel_context *ce); +static void guc_cancel_context_requests(struct intel_context *ce); +static void guc_blocked_fence_complete(struct intel_context *ce); + +static void scrub_guc_desc_for_outstanding_g2h(struct intel_guc *guc) +{ + struct intel_context *ce; + unsigned long index, flags; + bool pending_disable, pending_enable, deregister, destroyed, banned; + + xa_for_each(&guc->context_lookup, index, ce) { + /* Flush context */ + spin_lock_irqsave(&ce->guc_state.lock, flags); + spin_unlock_irqrestore(&ce->guc_state.lock, flags); + + /* + * Once we are at this point submission_disabled() is guaranteed + * to visible to all callers who set the below flags (see above + * flush and flushes in reset_prepare). If submission_disabled() + * is set, the caller shouldn't set these flags. + */ + + destroyed = context_destroyed(ce); + pending_enable = context_pending_enable(ce); + pending_disable = context_pending_disable(ce); + deregister = context_wait_for_deregister_to_register(ce); + banned = context_banned(ce); + init_sched_state(ce); + + if (pending_enable || destroyed || deregister) { + atomic_dec(&guc->outstanding_submission_g2h); + if (deregister) + guc_signal_context_fence(ce); + if (destroyed) { + release_guc_id(guc, ce); + __guc_context_destroy(ce); + } + if (pending_enable|| deregister) + intel_context_put(ce); + } + + /* Not mutualy exclusive with above if statement. 
*/ + if (pending_disable) { + guc_signal_context_fence(ce); + if (banned) { + guc_cancel_context_requests(ce); + intel_engine_signal_breadcrumbs(ce->engine); + } + intel_context_sched_disable_unpin(ce); + atomic_dec(&guc->outstanding_submission_g2h); + spin_lock_irqsave(&ce->guc_state.lock, flags); + guc_blocked_fence_complete(ce); + spin_unlock_irqrestore(&ce->guc_state.lock, flags); + + intel_context_put(ce); + } } +} + +static inline bool +submission_disabled(struct intel_guc *guc) +{ + struct i915_sched_engine * const sched_engine = guc->sched_engine; - __guc_dequeue(engine); + return unlikely(!sched_engine || + !__tasklet_is_enabled(&sched_engine->tasklet)); +} - i915_sched_engine_reset_on_empty(engine->sched_engine); +static void disable_submission(struct intel_guc *guc) +{ + struct i915_sched_engine * const sched_engine = guc->sched_engine; - spin_unlock_irqrestore(&engine->sched_engine->lock, flags); + if (__tasklet_is_enabled(&sched_engine->tasklet)) { + GEM_BUG_ON(!guc->ct.enabled); + __tasklet_disable_sync_once(&sched_engine->tasklet); + sched_engine->tasklet.callback = NULL; + } } -static void cs_irq_handler(struct intel_engine_cs *engine, u16 iir) +static void enable_submission(struct intel_guc *guc) { - if (iir & GT_RENDER_USER_INTERRUPT) { - intel_engine_signal_breadcrumbs(engine); - tasklet_hi_schedule(&engine->sched_engine->tasklet); + struct i915_sched_engine * const sched_engine = guc->sched_engine; + unsigned long flags; + + spin_lock_irqsave(&guc->sched_engine->lock, flags); + sched_engine->tasklet.callback = guc_submission_tasklet; + wmb(); + if (!__tasklet_is_enabled(&sched_engine->tasklet) && + __tasklet_enable(&sched_engine->tasklet)) { + GEM_BUG_ON(!guc->ct.enabled); + + /* And kick in case we missed a new request submission. */ + tasklet_hi_schedule(&sched_engine->tasklet); } + spin_unlock_irqrestore(&guc->sched_engine->lock, flags); } -static void guc_reset_prepare(struct intel_engine_cs *engine) +static void guc_flush_submissions(struct intel_guc *guc) { - ENGINE_TRACE(engine, "\n"); + struct i915_sched_engine * const sched_engine = guc->sched_engine; + unsigned long flags; + + spin_lock_irqsave(&sched_engine->lock, flags); + spin_unlock_irqrestore(&sched_engine->lock, flags); +} + +void intel_guc_submission_reset_prepare(struct intel_guc *guc) +{ + int i; + + if (unlikely(!guc_submission_initialized(guc))) + /* Reset called during driver load? GuC not yet initialised! */ + return; + + intel_gt_park_heartbeats(guc_to_gt(guc)); + disable_submission(guc); + guc->interrupts.disable(guc); + + /* Flush IRQ handler */ + spin_lock_irq(&guc_to_gt(guc)->irq_lock); + spin_unlock_irq(&guc_to_gt(guc)->irq_lock); + + guc_flush_submissions(guc); /* - * Prevent request submission to the hardware until we have - * completed the reset in i915_gem_reset_finish(). If a request - * is completed by one engine, it may then queue a request - * to a second via its execlists->tasklet *just* as we are - * calling engine->init_hw() and also writing the ELSP. - * Turning off the execlists->tasklet until the reset is over - * prevents the race. - */ - __tasklet_disable_sync_once(&engine->sched_engine->tasklet); + * Handle any outstanding G2Hs before reset. Call IRQ handler directly + * each pass as interrupt have been disabled. We always scrub for + * outstanding G2H as it is possible for outstanding_submission_g2h to + * be incremented after the context state update. 
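guc_flush_submissions() below, like the per-context flush in scrub_guc_desc_for_outstanding_g2h() above, uses the lock-then-immediately-unlock idiom: taking and dropping the lock guarantees that any thread which was inside the critical section when the flush started has left it. A small pthread sketch of the same barrier trick:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t submit_lock = PTHREAD_MUTEX_INITIALIZER;

/* Acquire and release with no work of our own: purely a synchronisation point. */
static void flush_submissions(void)
{
	pthread_mutex_lock(&submit_lock);
	pthread_mutex_unlock(&submit_lock);
}

static void *submitter(void *arg)
{
	pthread_mutex_lock(&submit_lock);
	/* ... publish a request while holding the lock ... */
	pthread_mutex_unlock(&submit_lock);
	return arg;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, submitter, NULL);
	flush_submissions();	/* after this, no submitter is mid-critical-section */
	pthread_join(t, NULL);
	printf("flushed\n");
	return 0;
}
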
+ */ + for (i = 0; i < 4 && atomic_read(&guc->outstanding_submission_g2h); ++i) { + intel_guc_to_host_event_handler(guc); +#define wait_for_reset(guc, wait_var) \ + intel_guc_wait_for_pending_msg(guc, wait_var, false, (HZ / 20)) + do { + wait_for_reset(guc, &guc->outstanding_submission_g2h); + } while (!list_empty(&guc->ct.requests.incoming)); + } + scrub_guc_desc_for_outstanding_g2h(guc); +} + +static struct intel_engine_cs * +guc_virtual_get_sibling(struct intel_engine_cs *ve, unsigned int sibling) +{ + struct intel_engine_cs *engine; + intel_engine_mask_t tmp, mask = ve->mask; + unsigned int num_siblings = 0; + + for_each_engine_masked(engine, ve->gt, mask, tmp) + if (num_siblings++ == sibling) + return engine; + + return NULL; +} + +static inline struct intel_engine_cs * +__context_to_physical_engine(struct intel_context *ce) +{ + struct intel_engine_cs *engine = ce->engine; + + if (intel_engine_is_virtual(engine)) + engine = guc_virtual_get_sibling(engine, 0); + + return engine; } -static void guc_reset_state(struct intel_context *ce, - struct intel_engine_cs *engine, - u32 head, - bool scrub) +static void guc_reset_state(struct intel_context *ce, u32 head, bool scrub) { + struct intel_engine_cs *engine = __context_to_physical_engine(ce); + + if (intel_context_is_banned(ce)) + return; + GEM_BUG_ON(!intel_context_is_pinned(ce)); /* @@ -315,69 +769,158 @@ static void guc_reset_state(struct intel_context *ce, lrc_update_regs(ce, engine, head); } -static void guc_reset_rewind(struct intel_engine_cs *engine, bool stalled) +static void guc_reset_nop(struct intel_engine_cs *engine) { - struct intel_engine_execlists * const execlists = &engine->execlists; - struct i915_request *rq; - unsigned long flags; +} - spin_lock_irqsave(&engine->sched_engine->lock, flags); +static void guc_rewind_nop(struct intel_engine_cs *engine, bool stalled) +{ +} - /* Push back any incomplete requests for replay after the reset. */ - rq = execlists_unwind_incomplete_requests(execlists); - if (!rq) - goto out_unlock; +static void +__unwind_incomplete_requests(struct intel_context *ce) +{ + struct i915_request *rq, *rn; + struct list_head *pl; + int prio = I915_PRIORITY_INVALID; + struct i915_sched_engine * const sched_engine = + ce->engine->sched_engine; + unsigned long flags; - if (!i915_request_started(rq)) - stalled = false; + spin_lock_irqsave(&sched_engine->lock, flags); + spin_lock(&ce->guc_active.lock); + list_for_each_entry_safe(rq, rn, + &ce->guc_active.requests, + sched.link) { + if (i915_request_completed(rq)) + continue; + + list_del_init(&rq->sched.link); + spin_unlock(&ce->guc_active.lock); + + __i915_request_unsubmit(rq); + + /* Push the request back into the queue for later resubmission. 
*/ + GEM_BUG_ON(rq_prio(rq) == I915_PRIORITY_INVALID); + if (rq_prio(rq) != prio) { + prio = rq_prio(rq); + pl = i915_sched_lookup_priolist(sched_engine, prio); + } + GEM_BUG_ON(i915_sched_engine_is_empty(sched_engine)); - __i915_request_reset(rq, stalled); - guc_reset_state(rq->context, engine, rq->head, stalled); + list_add_tail(&rq->sched.link, pl); + set_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags); -out_unlock: - spin_unlock_irqrestore(&engine->sched_engine->lock, flags); + spin_lock(&ce->guc_active.lock); + } + spin_unlock(&ce->guc_active.lock); + spin_unlock_irqrestore(&sched_engine->lock, flags); } -static void guc_reset_cancel(struct intel_engine_cs *engine) +static void __guc_reset_context(struct intel_context *ce, bool stalled) { - struct i915_sched_engine * const sched_engine = engine->sched_engine; - struct i915_request *rq, *rn; - struct rb_node *rb; - unsigned long flags; + struct i915_request *rq; + u32 head; - ENGINE_TRACE(engine, "\n"); + intel_context_get(ce); /* - * Before we call engine->cancel_requests(), we should have exclusive - * access to the submission state. This is arranged for us by the - * caller disabling the interrupt generation, the tasklet and other - * threads that may then access the same state, giving us a free hand - * to reset state. However, we still need to let lockdep be aware that - * we know this state may be accessed in hardirq context, so we - * disable the irq around this manipulation and we want to keep - * the spinlock focused on its duties and not accidentally conflate - * coverage to the submission's irq state. (Similarly, although we - * shouldn't need to disable irq around the manipulation of the - * submission's irq state, we also wish to remind ourselves that - * it is irq state.) + * GuC will implicitly mark the context as non-schedulable + * when it sends the reset notification. Make sure our state + * reflects this change. The context will be marked enabled + * on resubmission. */ - spin_lock_irqsave(&sched_engine->lock, flags); + clr_context_enabled(ce); - /* Mark all executing requests as skipped. */ - list_for_each_entry(rq, &sched_engine->requests, sched.link) { - i915_request_set_error_once(rq, -EIO); - i915_request_mark_complete(rq); + rq = intel_context_find_active_request(ce); + if (!rq) { + head = ce->ring->tail; + stalled = false; + goto out_replay; } - /* Flush the queued requests to the timeline list (for retiring). */ - while ((rb = rb_first_cached(&sched_engine->queue))) { - struct i915_priolist *p = to_priolist(rb); - + if (!i915_request_started(rq)) + stalled = false; + + GEM_BUG_ON(i915_active_is_idle(&ce->active)); + head = intel_ring_wrap(ce->ring, rq->head); + __i915_request_reset(rq, stalled); + +out_replay: + guc_reset_state(ce, head, stalled); + __unwind_incomplete_requests(ce); + intel_context_put(ce); +} + +void intel_guc_submission_reset(struct intel_guc *guc, bool stalled) +{ + struct intel_context *ce; + unsigned long index; + + if (unlikely(!guc_submission_initialized(guc))) + /* Reset called during driver load? GuC not yet initialised! */ + return; + + xa_for_each(&guc->context_lookup, index, ce) + if (intel_context_is_pinned(ce)) + __guc_reset_context(ce, stalled); + + /* GuC is blown away, drop all references to contexts */ + xa_destroy(&guc->context_lookup); +} + +static void guc_cancel_context_requests(struct intel_context *ce) +{ + struct i915_sched_engine *sched_engine = ce_to_guc(ce)->sched_engine; + struct i915_request *rq; + unsigned long flags; + + /* Mark all executing requests as skipped. 
*/ + spin_lock_irqsave(&sched_engine->lock, flags); + spin_lock(&ce->guc_active.lock); + list_for_each_entry(rq, &ce->guc_active.requests, sched.link) + i915_request_put(i915_request_mark_eio(rq)); + spin_unlock(&ce->guc_active.lock); + spin_unlock_irqrestore(&sched_engine->lock, flags); +} + +static void +guc_cancel_sched_engine_requests(struct i915_sched_engine *sched_engine) +{ + struct i915_request *rq, *rn; + struct rb_node *rb; + unsigned long flags; + + /* Can be called during boot if GuC fails to load */ + if (!sched_engine) + return; + + /* + * Before we call engine->cancel_requests(), we should have exclusive + * access to the submission state. This is arranged for us by the + * caller disabling the interrupt generation, the tasklet and other + * threads that may then access the same state, giving us a free hand + * to reset state. However, we still need to let lockdep be aware that + * we know this state may be accessed in hardirq context, so we + * disable the irq around this manipulation and we want to keep + * the spinlock focused on its duties and not accidentally conflate + * coverage to the submission's irq state. (Similarly, although we + * shouldn't need to disable irq around the manipulation of the + * submission's irq state, we also wish to remind ourselves that + * it is irq state.) + */ + spin_lock_irqsave(&sched_engine->lock, flags); + + /* Flush the queued requests to the timeline list (for retiring). */ + while ((rb = rb_first_cached(&sched_engine->queue))) { + struct i915_priolist *p = to_priolist(rb); + priolist_for_each_request_consume(rq, rn, p) { list_del_init(&rq->sched.link); + __i915_request_submit(rq); - dma_fence_set_error(&rq->fence, -EIO); - i915_request_mark_complete(rq); + + i915_request_put(i915_request_mark_eio(rq)); } rb_erase_cached(&p->node, &sched_engine->queue); @@ -389,137 +932,1318 @@ static void guc_reset_cancel(struct intel_engine_cs *engine) sched_engine->queue_priority_hint = INT_MIN; sched_engine->queue = RB_ROOT_CACHED; - spin_unlock_irqrestore(&sched_engine->lock, flags); + spin_unlock_irqrestore(&sched_engine->lock, flags); +} + +void intel_guc_submission_cancel_requests(struct intel_guc *guc) +{ + struct intel_context *ce; + unsigned long index; + + xa_for_each(&guc->context_lookup, index, ce) + if (intel_context_is_pinned(ce)) + guc_cancel_context_requests(ce); + + guc_cancel_sched_engine_requests(guc->sched_engine); + + /* GuC is blown away, drop all references to contexts */ + xa_destroy(&guc->context_lookup); +} + +void intel_guc_submission_reset_finish(struct intel_guc *guc) +{ + /* Reset called during driver load or during wedge? */ + if (unlikely(!guc_submission_initialized(guc) || + test_bit(I915_WEDGED, &guc_to_gt(guc)->reset.flags))) + return; + + /* + * Technically possible for either of these values to be non-zero here, + * but very unlikely + harmless. Regardless let's add a warn so we can + * see in CI if this happens frequently / a precursor to taking down the + * machine. + */ + GEM_WARN_ON(atomic_read(&guc->outstanding_submission_g2h)); + atomic_set(&guc->outstanding_submission_g2h, 0); + + intel_guc_global_policies_update(guc); + enable_submission(guc); + intel_gt_unpark_heartbeats(guc_to_gt(guc)); +} + +/* + * Set up the memory resources to be shared with the GuC (via the GGTT) + * at firmware loading time. 
+ */ +int intel_guc_submission_init(struct intel_guc *guc) +{ + int ret; + + if (guc->lrc_desc_pool) + return 0; + + ret = guc_lrc_desc_pool_create(guc); + if (ret) + return ret; + /* + * Keep static analysers happy, let them know that we allocated the + * vma after testing that it didn't exist earlier. + */ + GEM_BUG_ON(!guc->lrc_desc_pool); + + xa_init_flags(&guc->context_lookup, XA_FLAGS_LOCK_IRQ); + + spin_lock_init(&guc->contexts_lock); + INIT_LIST_HEAD(&guc->guc_id_list); + ida_init(&guc->guc_ids); + + return 0; +} + +void intel_guc_submission_fini(struct intel_guc *guc) +{ + if (!guc->lrc_desc_pool) + return; + + guc_lrc_desc_pool_destroy(guc); + i915_sched_engine_put(guc->sched_engine); +} + +static inline void queue_request(struct i915_sched_engine *sched_engine, + struct i915_request *rq, + int prio) +{ + GEM_BUG_ON(!list_empty(&rq->sched.link)); + list_add_tail(&rq->sched.link, + i915_sched_lookup_priolist(sched_engine, prio)); + set_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags); +} + +static int guc_bypass_tasklet_submit(struct intel_guc *guc, + struct i915_request *rq) +{ + int ret; + + __i915_request_submit(rq); + + trace_i915_request_in(rq, 0); + + guc_set_lrc_tail(rq); + ret = guc_add_request(guc, rq); + if (ret == -EBUSY) + guc->stalled_request = rq; + + if (unlikely(ret == -EPIPE)) + disable_submission(guc); + + return ret; +} + +static void guc_submit_request(struct i915_request *rq) +{ + struct i915_sched_engine *sched_engine = rq->engine->sched_engine; + struct intel_guc *guc = &rq->engine->gt->uc.guc; + unsigned long flags; + + /* Will be called from irq-context when using foreign fences. */ + spin_lock_irqsave(&sched_engine->lock, flags); + + if (submission_disabled(guc) || guc->stalled_request || + !i915_sched_engine_is_empty(sched_engine)) + queue_request(sched_engine, rq, rq_prio(rq)); + else if (guc_bypass_tasklet_submit(guc, rq) == -EBUSY) + tasklet_hi_schedule(&sched_engine->tasklet); + + spin_unlock_irqrestore(&sched_engine->lock, flags); +} + +static int new_guc_id(struct intel_guc *guc) +{ + return ida_simple_get(&guc->guc_ids, 0, + GUC_MAX_LRC_DESCRIPTORS, GFP_KERNEL | + __GFP_RETRY_MAYFAIL | __GFP_NOWARN); +} + +static void __release_guc_id(struct intel_guc *guc, struct intel_context *ce) +{ + if (!context_guc_id_invalid(ce)) { + ida_simple_remove(&guc->guc_ids, ce->guc_id); + reset_lrc_desc(guc, ce->guc_id); + set_context_guc_id_invalid(ce); + } + if (!list_empty(&ce->guc_id_link)) + list_del_init(&ce->guc_id_link); +} + +static void release_guc_id(struct intel_guc *guc, struct intel_context *ce) +{ + unsigned long flags; + + spin_lock_irqsave(&guc->contexts_lock, flags); + __release_guc_id(guc, ce); + spin_unlock_irqrestore(&guc->contexts_lock, flags); +} + +static int steal_guc_id(struct intel_guc *guc) +{ + struct intel_context *ce; + int guc_id; + + lockdep_assert_held(&guc->contexts_lock); + + if (!list_empty(&guc->guc_id_list)) { + ce = list_first_entry(&guc->guc_id_list, + struct intel_context, + guc_id_link); + + GEM_BUG_ON(atomic_read(&ce->guc_id_ref)); + GEM_BUG_ON(context_guc_id_invalid(ce)); + + list_del_init(&ce->guc_id_link); + guc_id = ce->guc_id; + clr_context_registered(ce); + set_context_guc_id_invalid(ce); + return guc_id; + } else { + return -EAGAIN; + } +} + +static int assign_guc_id(struct intel_guc *guc, u16 *out) +{ + int ret; + + lockdep_assert_held(&guc->contexts_lock); + + ret = new_guc_id(guc); + if (unlikely(ret < 0)) { + ret = steal_guc_id(guc); + if (ret < 0) + return ret; + } + + *out = ret; + return 0; +} + +#define 
PIN_GUC_ID_TRIES 4 +static int pin_guc_id(struct intel_guc *guc, struct intel_context *ce) +{ + int ret = 0; + unsigned long flags, tries = PIN_GUC_ID_TRIES; + + GEM_BUG_ON(atomic_read(&ce->guc_id_ref)); + +try_again: + spin_lock_irqsave(&guc->contexts_lock, flags); + + if (context_guc_id_invalid(ce)) { + ret = assign_guc_id(guc, &ce->guc_id); + if (ret) + goto out_unlock; + ret = 1; /* Indidcates newly assigned guc_id */ + } + if (!list_empty(&ce->guc_id_link)) + list_del_init(&ce->guc_id_link); + atomic_inc(&ce->guc_id_ref); + +out_unlock: + spin_unlock_irqrestore(&guc->contexts_lock, flags); + + /* + * -EAGAIN indicates no guc_ids are available, let's retire any + * outstanding requests to see if that frees up a guc_id. If the first + * retire didn't help, insert a sleep with the timeslice duration before + * attempting to retire more requests. Double the sleep period each + * subsequent pass before finally giving up. The sleep period has max of + * 100ms and minimum of 1ms. + */ + if (ret == -EAGAIN && --tries) { + if (PIN_GUC_ID_TRIES - tries > 1) { + unsigned int timeslice_shifted = + ce->engine->props.timeslice_duration_ms << + (PIN_GUC_ID_TRIES - tries - 2); + unsigned int max = min_t(unsigned int, 100, + timeslice_shifted); + + msleep(max_t(unsigned int, max, 1)); + } + intel_gt_retire_requests(guc_to_gt(guc)); + goto try_again; + } + + return ret; +} + +static void unpin_guc_id(struct intel_guc *guc, struct intel_context *ce) +{ + unsigned long flags; + + GEM_BUG_ON(atomic_read(&ce->guc_id_ref) < 0); + + spin_lock_irqsave(&guc->contexts_lock, flags); + if (!context_guc_id_invalid(ce) && list_empty(&ce->guc_id_link) && + !atomic_read(&ce->guc_id_ref)) + list_add_tail(&ce->guc_id_link, &guc->guc_id_list); + spin_unlock_irqrestore(&guc->contexts_lock, flags); +} + +static int __guc_action_register_context(struct intel_guc *guc, + u32 guc_id, + u32 offset, + bool loop) +{ + u32 action[] = { + INTEL_GUC_ACTION_REGISTER_CONTEXT, + guc_id, + offset, + }; + + return guc_submission_send_busy_loop(guc, action, ARRAY_SIZE(action), + 0, loop); +} + +static int register_context(struct intel_context *ce, bool loop) +{ + struct intel_guc *guc = ce_to_guc(ce); + u32 offset = intel_guc_ggtt_offset(guc, guc->lrc_desc_pool) + + ce->guc_id * sizeof(struct guc_lrc_desc); + int ret; + + trace_intel_context_register(ce); + + ret = __guc_action_register_context(guc, ce->guc_id, offset, loop); + set_context_registered(ce); + return ret; +} + +static int __guc_action_deregister_context(struct intel_guc *guc, + u32 guc_id, + bool loop) +{ + u32 action[] = { + INTEL_GUC_ACTION_DEREGISTER_CONTEXT, + guc_id, + }; + + return guc_submission_send_busy_loop(guc, action, ARRAY_SIZE(action), + G2H_LEN_DW_DEREGISTER_CONTEXT, + loop); +} + +static int deregister_context(struct intel_context *ce, u32 guc_id, bool loop) +{ + struct intel_guc *guc = ce_to_guc(ce); + + trace_intel_context_deregister(ce); + + return __guc_action_deregister_context(guc, guc_id, loop); +} + +static intel_engine_mask_t adjust_engine_mask(u8 class, intel_engine_mask_t mask) +{ + switch (class) { + case RENDER_CLASS: + return mask >> RCS0; + case VIDEO_ENHANCEMENT_CLASS: + return mask >> VECS0; + case VIDEO_DECODE_CLASS: + return mask >> VCS0; + case COPY_ENGINE_CLASS: + return mask >> BCS0; + default: + MISSING_CASE(class); + return 0; + } +} + +static void guc_context_policy_init(struct intel_engine_cs *engine, + struct guc_lrc_desc *desc) +{ + desc->policy_flags = 0; + + if (engine->flags & I915_ENGINE_WANT_FORCED_PREEMPTION) + 
desc->policy_flags |= CONTEXT_POLICY_FLAG_PREEMPT_TO_IDLE; + + /* NB: For both of these, zero means disabled. */ + desc->execution_quantum = engine->props.timeslice_duration_ms * 1000; + desc->preemption_timeout = engine->props.preempt_timeout_ms * 1000; +} + +static inline u8 map_i915_prio_to_guc_prio(int prio); + +static int guc_lrc_desc_pin(struct intel_context *ce, bool loop) +{ + struct intel_engine_cs *engine = ce->engine; + struct intel_runtime_pm *runtime_pm = engine->uncore->rpm; + struct intel_guc *guc = &engine->gt->uc.guc; + u32 desc_idx = ce->guc_id; + struct guc_lrc_desc *desc; + const struct i915_gem_context *ctx; + int prio = I915_CONTEXT_DEFAULT_PRIORITY; + bool context_registered; + intel_wakeref_t wakeref; + int ret = 0; + + GEM_BUG_ON(!engine->mask); + + /* + * Ensure the LRC + CT vmas are in the same region, as the write barrier + * is done based on the CT vma region. + */ + GEM_BUG_ON(i915_gem_object_is_lmem(guc->ct.vma->obj) != + i915_gem_object_is_lmem(ce->ring->vma->obj)); + + context_registered = lrc_desc_registered(guc, desc_idx); + + rcu_read_lock(); + ctx = rcu_dereference(ce->gem_context); + if (ctx) + prio = ctx->sched.priority; + rcu_read_unlock(); + + reset_lrc_desc(guc, desc_idx); + set_lrc_desc_registered(guc, desc_idx, ce); + + desc = __get_lrc_desc(guc, desc_idx); + desc->engine_class = engine_class_to_guc_class(engine->class); + desc->engine_submit_mask = adjust_engine_mask(engine->class, + engine->mask); + desc->hw_context_desc = ce->lrc.lrca; + ce->guc_prio = map_i915_prio_to_guc_prio(prio); + desc->priority = ce->guc_prio; + desc->context_flags = CONTEXT_REGISTRATION_FLAG_KMD; + guc_context_policy_init(engine, desc); + init_sched_state(ce); + + /* + * The context_lookup xarray is used to determine if the hardware + * context is currently registered. There are two cases in which it + * could be registered: either the guc_id has been stolen from + * another context or the lrc descriptor address of this context has + * changed. In either case the context needs to be deregistered with the + * GuC before registering this context. + */ + if (context_registered) { + trace_intel_context_steal_guc_id(ce); + if (!loop) { + set_context_wait_for_deregister_to_register(ce); + intel_context_get(ce); + } else { + bool disabled; + unsigned long flags; + + /* Seal race with Reset */ + spin_lock_irqsave(&ce->guc_state.lock, flags); + disabled = submission_disabled(guc); + if (likely(!disabled)) { + set_context_wait_for_deregister_to_register(ce); + intel_context_get(ce); + } + spin_unlock_irqrestore(&ce->guc_state.lock, flags); + if (unlikely(disabled)) { + reset_lrc_desc(guc, desc_idx); + return 0; /* Will get registered later */ + } + } + + /* + * If stealing the guc_id, this ce has the same guc_id as the + * context whose guc_id was stolen. 
+ */ + with_intel_runtime_pm(runtime_pm, wakeref) + ret = deregister_context(ce, ce->guc_id, loop); + if (unlikely(ret == -EBUSY)) { + clr_context_wait_for_deregister_to_register(ce); + intel_context_put(ce); + } else if (unlikely(ret == -ENODEV)) + ret = 0; /* Will get registered later */ + } else { + with_intel_runtime_pm(runtime_pm, wakeref) + ret = register_context(ce, loop); + if (unlikely(ret == -EBUSY)) + reset_lrc_desc(guc, desc_idx); + else if (unlikely(ret == -ENODEV)) + ret = 0; /* Will get registered later */ + } + + return ret; +} + +static int __guc_context_pre_pin(struct intel_context *ce, + struct intel_engine_cs *engine, + struct i915_gem_ww_ctx *ww, + void **vaddr) +{ + return lrc_pre_pin(ce, engine, ww, vaddr); +} + +static int __guc_context_pin(struct intel_context *ce, + struct intel_engine_cs *engine, + void *vaddr) +{ + if (i915_ggtt_offset(ce->state) != + (ce->lrc.lrca & CTX_GTT_ADDRESS_MASK)) + set_bit(CONTEXT_LRCA_DIRTY, &ce->flags); + + /* + * GuC context gets pinned in guc_request_alloc. See that function for + * explaination of why. + */ + + return lrc_pin(ce, engine, vaddr); +} + +static int guc_context_pre_pin(struct intel_context *ce, + struct i915_gem_ww_ctx *ww, + void **vaddr) +{ + return __guc_context_pre_pin(ce, ce->engine, ww, vaddr); +} + +static int guc_context_pin(struct intel_context *ce, void *vaddr) +{ + return __guc_context_pin(ce, ce->engine, vaddr); +} + +static void guc_context_unpin(struct intel_context *ce) +{ + struct intel_guc *guc = ce_to_guc(ce); + + unpin_guc_id(guc, ce); + lrc_unpin(ce); +} + +static void guc_context_post_unpin(struct intel_context *ce) +{ + lrc_post_unpin(ce); +} + +static void __guc_context_sched_enable(struct intel_guc *guc, + struct intel_context *ce) +{ + u32 action[] = { + INTEL_GUC_ACTION_SCHED_CONTEXT_MODE_SET, + ce->guc_id, + GUC_CONTEXT_ENABLE + }; + + trace_intel_context_sched_enable(ce); + + guc_submission_send_busy_loop(guc, action, ARRAY_SIZE(action), + G2H_LEN_DW_SCHED_CONTEXT_MODE_SET, true); +} + +static void __guc_context_sched_disable(struct intel_guc *guc, + struct intel_context *ce, + u16 guc_id) +{ + u32 action[] = { + INTEL_GUC_ACTION_SCHED_CONTEXT_MODE_SET, + guc_id, /* ce->guc_id not stable */ + GUC_CONTEXT_DISABLE + }; + + GEM_BUG_ON(guc_id == GUC_INVALID_LRC_ID); + + trace_intel_context_sched_disable(ce); + + guc_submission_send_busy_loop(guc, action, ARRAY_SIZE(action), + G2H_LEN_DW_SCHED_CONTEXT_MODE_SET, true); +} + +static void guc_blocked_fence_complete(struct intel_context *ce) +{ + lockdep_assert_held(&ce->guc_state.lock); + + if (!i915_sw_fence_done(&ce->guc_blocked)) + i915_sw_fence_complete(&ce->guc_blocked); +} + +static void guc_blocked_fence_reinit(struct intel_context *ce) +{ + lockdep_assert_held(&ce->guc_state.lock); + GEM_BUG_ON(!i915_sw_fence_done(&ce->guc_blocked)); + i915_sw_fence_fini(&ce->guc_blocked); + i915_sw_fence_reinit(&ce->guc_blocked); + i915_sw_fence_await(&ce->guc_blocked); + i915_sw_fence_commit(&ce->guc_blocked); +} + +static u16 prep_context_pending_disable(struct intel_context *ce) +{ + lockdep_assert_held(&ce->guc_state.lock); + + set_context_pending_disable(ce); + clr_context_enabled(ce); + guc_blocked_fence_reinit(ce); + intel_context_get(ce); + + return ce->guc_id; +} + +static struct i915_sw_fence *guc_context_block(struct intel_context *ce) +{ + struct intel_guc *guc = ce_to_guc(ce); + struct i915_sched_engine *sched_engine = ce->engine->sched_engine; + unsigned long flags; + struct intel_runtime_pm *runtime_pm = &ce->engine->gt->i915->runtime_pm; + 
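/* Overview of the blocking flow below: guc_context_block() marks the context pending-disable, sends a SCHED_CONTEXT_MODE_SET(DISABLE) H2G and returns &ce->guc_blocked; that fence is completed from the G2H handler (intel_guc_sched_done_process_msg() -> guc_blocked_fence_complete()), and guc_context_unblock() re-enables scheduling afterwards. */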
intel_wakeref_t wakeref; + u16 guc_id; + bool enabled; + + spin_lock_irqsave(&sched_engine->lock, flags); + incr_context_blocked(ce); + spin_unlock_irqrestore(&sched_engine->lock, flags); + + spin_lock_irqsave(&ce->guc_state.lock, flags); + enabled = context_enabled(ce); + if (unlikely(!enabled || submission_disabled(guc))) { + if (enabled) + clr_context_enabled(ce); + spin_unlock_irqrestore(&ce->guc_state.lock, flags); + return &ce->guc_blocked; + } + + /* + * We add +2 here as the schedule disable complete CTB handler calls + * intel_context_sched_disable_unpin (-2 to pin_count). + */ + atomic_add(2, &ce->pin_count); + + guc_id = prep_context_pending_disable(ce); + spin_unlock_irqrestore(&ce->guc_state.lock, flags); + + with_intel_runtime_pm(runtime_pm, wakeref) + __guc_context_sched_disable(guc, ce, guc_id); + + return &ce->guc_blocked; +} + +static void guc_context_unblock(struct intel_context *ce) +{ + struct intel_guc *guc = ce_to_guc(ce); + struct i915_sched_engine *sched_engine = ce->engine->sched_engine; + unsigned long flags; + struct intel_runtime_pm *runtime_pm = &ce->engine->gt->i915->runtime_pm; + intel_wakeref_t wakeref; + + GEM_BUG_ON(context_enabled(ce)); + + if (unlikely(context_blocked(ce) > 1)) { + spin_lock_irqsave(&sched_engine->lock, flags); + if (likely(context_blocked(ce) > 1)) + goto decrement; + spin_unlock_irqrestore(&sched_engine->lock, flags); + } + + spin_lock_irqsave(&ce->guc_state.lock, flags); + if (unlikely(submission_disabled(guc) || + !intel_context_is_pinned(ce) || + context_pending_disable(ce) || + context_blocked(ce) > 1)) { + spin_unlock_irqrestore(&ce->guc_state.lock, flags); + goto out; + } + + set_context_pending_enable(ce); + set_context_enabled(ce); + intel_context_get(ce); + spin_unlock_irqrestore(&ce->guc_state.lock, flags); + + with_intel_runtime_pm(runtime_pm, wakeref) + __guc_context_sched_enable(guc, ce); + +out: + spin_lock_irqsave(&sched_engine->lock, flags); +decrement: + decr_context_blocked(ce); + spin_unlock_irqrestore(&sched_engine->lock, flags); +} + +static void guc_context_cancel_request(struct intel_context *ce, + struct i915_request *rq) +{ + if (i915_sw_fence_signaled(&rq->submit)) { + struct i915_sw_fence *fence = guc_context_block(ce); + + i915_sw_fence_wait(fence); + if (!i915_request_completed(rq)) { + __i915_request_skip(rq); + guc_reset_state(ce, intel_ring_wrap(ce->ring, rq->head), + true); + } + guc_context_unblock(ce); + } +} + +static void __guc_context_set_preemption_timeout(struct intel_guc *guc, + u16 guc_id, + u32 preemption_timeout) +{ + u32 action [] = { + INTEL_GUC_ACTION_SET_CONTEXT_PREEMPTION_TIMEOUT, + guc_id, + preemption_timeout + }; + + intel_guc_send_busy_loop(guc, action, ARRAY_SIZE(action), 0, true); +} + +static void guc_context_ban(struct intel_context *ce, struct i915_request *rq) +{ + struct intel_guc *guc = ce_to_guc(ce); + struct intel_runtime_pm *runtime_pm = + &ce->engine->gt->i915->runtime_pm; + intel_wakeref_t wakeref; + unsigned long flags; + + guc_flush_submissions(guc); + + spin_lock_irqsave(&ce->guc_state.lock, flags); + set_context_banned(ce); + + if (submission_disabled(guc) || (!context_enabled(ce) && + !context_pending_disable(ce))) { + spin_unlock_irqrestore(&ce->guc_state.lock, flags); + + guc_cancel_context_requests(ce); + intel_engine_signal_breadcrumbs(ce->engine); + } else if (!context_pending_disable(ce)) { + u16 guc_id; + + /* + * We add +2 here as the schedule disable complete CTB handler + * calls intel_context_sched_disable_unpin (-2 to pin_count). 
+ */ + atomic_add(2, &ce->pin_count); + + guc_id = prep_context_pending_disable(ce); + spin_unlock_irqrestore(&ce->guc_state.lock, flags); + + /* + * In addition to disabling scheduling, set the preemption + * timeout to the minimum value (1 us) so the banned context + * gets kicked off the HW ASAP. + */ + with_intel_runtime_pm(runtime_pm, wakeref) { + __guc_context_set_preemption_timeout(guc, guc_id, 1); + __guc_context_sched_disable(guc, ce, guc_id); + } + } else { + if (!context_guc_id_invalid(ce)) + with_intel_runtime_pm(runtime_pm, wakeref) + __guc_context_set_preemption_timeout(guc, + ce->guc_id, + 1); + spin_unlock_irqrestore(&ce->guc_state.lock, flags); + } +} + +static void guc_context_sched_disable(struct intel_context *ce) +{ + struct intel_guc *guc = ce_to_guc(ce); + unsigned long flags; + struct intel_runtime_pm *runtime_pm = &ce->engine->gt->i915->runtime_pm; + intel_wakeref_t wakeref; + u16 guc_id; + bool enabled; + + if (submission_disabled(guc) || context_guc_id_invalid(ce) || + !lrc_desc_registered(guc, ce->guc_id)) { + clr_context_enabled(ce); + goto unpin; + } + + if (!context_enabled(ce)) + goto unpin; + + spin_lock_irqsave(&ce->guc_state.lock, flags); + + /* + * We have to check if the context has been disabled by another thread. + * We also have to check if the context has been pinned again as another + * pin operation is allowed to pass this function. Checking the pin + * count, within ce->guc_state.lock, synchronizes this function with + * guc_request_alloc ensuring a request doesn't slip through the + * 'context_pending_disable' fence. Checking within the spin lock (can't + * sleep) ensures another process doesn't pin this context and generate + * a request before we set the 'context_pending_disable' flag here. + */ + enabled = context_enabled(ce); + if (unlikely(!enabled || submission_disabled(guc))) { + if (enabled) + clr_context_enabled(ce); + spin_unlock_irqrestore(&ce->guc_state.lock, flags); + goto unpin; + } + if (unlikely(atomic_add_unless(&ce->pin_count, -2, 2))) { + spin_unlock_irqrestore(&ce->guc_state.lock, flags); + return; + } + guc_id = prep_context_pending_disable(ce); + + spin_unlock_irqrestore(&ce->guc_state.lock, flags); + + with_intel_runtime_pm(runtime_pm, wakeref) + __guc_context_sched_disable(guc, ce, guc_id); + + return; +unpin: + intel_context_sched_disable_unpin(ce); +} + +static inline void guc_lrc_desc_unpin(struct intel_context *ce) +{ + struct intel_guc *guc = ce_to_guc(ce); + + GEM_BUG_ON(!lrc_desc_registered(guc, ce->guc_id)); + GEM_BUG_ON(ce != __get_context(guc, ce->guc_id)); + GEM_BUG_ON(context_enabled(ce)); + + clr_context_registered(ce); + deregister_context(ce, ce->guc_id, true); +} + +static void __guc_context_destroy(struct intel_context *ce) +{ + GEM_BUG_ON(ce->guc_prio_count[GUC_CLIENT_PRIORITY_KMD_HIGH] || + ce->guc_prio_count[GUC_CLIENT_PRIORITY_HIGH] || + ce->guc_prio_count[GUC_CLIENT_PRIORITY_KMD_NORMAL] || + ce->guc_prio_count[GUC_CLIENT_PRIORITY_NORMAL]); + + lrc_fini(ce); + intel_context_fini(ce); + + if (intel_engine_is_virtual(ce->engine)) { + struct guc_virtual_engine *ve = + container_of(ce, typeof(*ve), context); + + if (ve->base.breadcrumbs) + intel_breadcrumbs_put(ve->base.breadcrumbs); + + kfree(ve); + } else { + intel_context_free(ce); + } +} + +static void guc_context_destroy(struct kref *kref) +{ + struct intel_context *ce = container_of(kref, typeof(*ce), ref); + struct intel_runtime_pm *runtime_pm = ce->engine->uncore->rpm; + struct intel_guc *guc = ce_to_guc(ce); + intel_wakeref_t wakeref; + unsigned 
long flags; + bool disabled; + + /* + * If the guc_id is invalid this context has been stolen and we can free + * it immediately. It can also be freed immediately if the context is not + * registered with the GuC or the GuC is in the middle of a reset. + */ + if (context_guc_id_invalid(ce)) { + __guc_context_destroy(ce); + return; + } else if (submission_disabled(guc) || + !lrc_desc_registered(guc, ce->guc_id)) { + release_guc_id(guc, ce); + __guc_context_destroy(ce); + return; + } + + /* + * We have to acquire the context spinlock and check guc_id again; if it + * is valid it hasn't been stolen and needs to be deregistered. We + * delete this context from the list of unpinned guc_ids available to + * steal to seal a race with guc_lrc_desc_pin(). When the G2H CTB + * returns indicating this context has been deregistered the guc_id is + * returned to the pool of available guc_ids. + */ + spin_lock_irqsave(&guc->contexts_lock, flags); + if (context_guc_id_invalid(ce)) { + spin_unlock_irqrestore(&guc->contexts_lock, flags); + __guc_context_destroy(ce); + return; + } + + if (!list_empty(&ce->guc_id_link)) + list_del_init(&ce->guc_id_link); + spin_unlock_irqrestore(&guc->contexts_lock, flags); + + /* Seal race with Reset */ + spin_lock_irqsave(&ce->guc_state.lock, flags); + disabled = submission_disabled(guc); + if (likely(!disabled)) + set_context_destroyed(ce); + spin_unlock_irqrestore(&ce->guc_state.lock, flags); + if (unlikely(disabled)) { + release_guc_id(guc, ce); + __guc_context_destroy(ce); + return; + } + + /* + * We defer GuC context deregistration until the context is destroyed + * in order to save on CTBs. With this optimization ideally we only need + * 1 CTB to register the context during the first pin and 1 CTB to + * deregister the context when the context is destroyed. Without this + * optimization, a CTB would be needed every pin & unpin. + * + * XXX: Need to acquire the runtime wakeref as this can be triggered + * from context_free_worker when the runtime wakeref is not held. + * guc_lrc_desc_unpin requires the runtime wakeref as a GuC register is + * written in an H2G CTB to deregister the context. A future patch may + * defer this H2G CTB if the runtime wakeref is zero. 
+ */ + with_intel_runtime_pm(runtime_pm, wakeref) + guc_lrc_desc_unpin(ce); +} + +static int guc_context_alloc(struct intel_context *ce) +{ + return lrc_alloc(ce, ce->engine); +} + +static void guc_context_set_prio(struct intel_guc *guc, + struct intel_context *ce, + u8 prio) +{ + u32 action[] = { + INTEL_GUC_ACTION_SET_CONTEXT_PRIORITY, + ce->guc_id, + prio, + }; + + GEM_BUG_ON(prio < GUC_CLIENT_PRIORITY_KMD_HIGH || + prio > GUC_CLIENT_PRIORITY_NORMAL); + + if (ce->guc_prio == prio || submission_disabled(guc) || + !context_registered(ce)) + return; + + guc_submission_send_busy_loop(guc, action, ARRAY_SIZE(action), 0, true); + + ce->guc_prio = prio; + trace_intel_context_set_prio(ce); +} + +static inline u8 map_i915_prio_to_guc_prio(int prio) +{ + if (prio == I915_PRIORITY_NORMAL) + return GUC_CLIENT_PRIORITY_KMD_NORMAL; + else if (prio < I915_PRIORITY_NORMAL) + return GUC_CLIENT_PRIORITY_NORMAL; + else if (prio != I915_PRIORITY_BARRIER) + return GUC_CLIENT_PRIORITY_HIGH; + else + return GUC_CLIENT_PRIORITY_KMD_HIGH; +} + +static inline void add_context_inflight_prio(struct intel_context *ce, + u8 guc_prio) +{ + lockdep_assert_held(&ce->guc_active.lock); + GEM_BUG_ON(guc_prio >= ARRAY_SIZE(ce->guc_prio_count)); + + ++ce->guc_prio_count[guc_prio]; + + /* Overflow protection */ + GEM_WARN_ON(!ce->guc_prio_count[guc_prio]); +} + +static inline void sub_context_inflight_prio(struct intel_context *ce, + u8 guc_prio) +{ + lockdep_assert_held(&ce->guc_active.lock); + GEM_BUG_ON(guc_prio >= ARRAY_SIZE(ce->guc_prio_count)); + + /* Underflow protection */ + GEM_WARN_ON(!ce->guc_prio_count[guc_prio]); + + --ce->guc_prio_count[guc_prio]; +} + +static inline void update_context_prio(struct intel_context *ce) +{ + struct intel_guc *guc = &ce->engine->gt->uc.guc; + int i; + + lockdep_assert_held(&ce->guc_active.lock); + + for (i = 0; i < ARRAY_SIZE(ce->guc_prio_count); ++i) { + if (ce->guc_prio_count[i]) { + guc_context_set_prio(guc, ce, i); + break; + } + } +} + +static inline bool new_guc_prio_higher(u8 old_guc_prio, u8 new_guc_prio) +{ + /* Lower value is higher priority */ + return new_guc_prio < old_guc_prio; +} + +static void add_to_context(struct i915_request *rq) +{ + struct intel_context *ce = rq->context; + u8 new_guc_prio = map_i915_prio_to_guc_prio(rq_prio(rq)); + + GEM_BUG_ON(rq->guc_prio == GUC_PRIO_FINI); + + spin_lock(&ce->guc_active.lock); + list_move_tail(&rq->sched.link, &ce->guc_active.requests); + + if (rq->guc_prio == GUC_PRIO_INIT) { + rq->guc_prio = new_guc_prio; + add_context_inflight_prio(ce, rq->guc_prio); + } else if (new_guc_prio_higher(rq->guc_prio, new_guc_prio)) { + sub_context_inflight_prio(ce, rq->guc_prio); + rq->guc_prio = new_guc_prio; + add_context_inflight_prio(ce, rq->guc_prio); + } + update_context_prio(ce); + + spin_unlock(&ce->guc_active.lock); +} + +static void guc_prio_fini(struct i915_request *rq, struct intel_context *ce) +{ + if (rq->guc_prio != GUC_PRIO_INIT && + rq->guc_prio != GUC_PRIO_FINI) { + sub_context_inflight_prio(ce, rq->guc_prio); + update_context_prio(ce); + } + rq->guc_prio = GUC_PRIO_FINI; +} + +static void remove_from_context(struct i915_request *rq) +{ + struct intel_context *ce = rq->context; + + spin_lock_irq(&ce->guc_active.lock); + + list_del_init(&rq->sched.link); + clear_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags); + + /* Prevent further __await_execution() registering a cb, then flush */ + set_bit(I915_FENCE_FLAG_ACTIVE, &rq->fence.flags); + + guc_prio_fini(rq, ce); + + spin_unlock_irq(&ce->guc_active.lock); + + 
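/* Drop the per-request guc_id reference taken in guc_request_alloc(); once it reaches zero, unpin_guc_id() can return this guc_id to the list of ids that are available to be stolen. */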
atomic_dec(&ce->guc_id_ref); + i915_request_notify_execute_cb_imm(rq); +} + +static const struct intel_context_ops guc_context_ops = { + .alloc = guc_context_alloc, + + .pre_pin = guc_context_pre_pin, + .pin = guc_context_pin, + .unpin = guc_context_unpin, + .post_unpin = guc_context_post_unpin, + + .ban = guc_context_ban, + + .cancel_request = guc_context_cancel_request, + + .enter = intel_context_enter_engine, + .exit = intel_context_exit_engine, + + .sched_disable = guc_context_sched_disable, + + .reset = lrc_reset, + .destroy = guc_context_destroy, + + .create_virtual = guc_create_virtual, +}; + +static void __guc_signal_context_fence(struct intel_context *ce) +{ + struct i915_request *rq; + + lockdep_assert_held(&ce->guc_state.lock); + + if (!list_empty(&ce->guc_state.fences)) + trace_intel_context_fence_release(ce); + + list_for_each_entry(rq, &ce->guc_state.fences, guc_fence_link) + i915_sw_fence_complete(&rq->submit); + + INIT_LIST_HEAD(&ce->guc_state.fences); +} + +static void guc_signal_context_fence(struct intel_context *ce) +{ + unsigned long flags; + + spin_lock_irqsave(&ce->guc_state.lock, flags); + clr_context_wait_for_deregister_to_register(ce); + __guc_signal_context_fence(ce); + spin_unlock_irqrestore(&ce->guc_state.lock, flags); +} + +static bool context_needs_register(struct intel_context *ce, bool new_guc_id) +{ + return (new_guc_id || test_bit(CONTEXT_LRCA_DIRTY, &ce->flags) || + !lrc_desc_registered(ce_to_guc(ce), ce->guc_id)) && + !submission_disabled(ce_to_guc(ce)); +} + +static int guc_request_alloc(struct i915_request *rq) +{ + struct intel_context *ce = rq->context; + struct intel_guc *guc = ce_to_guc(ce); + unsigned long flags; + int ret; + + GEM_BUG_ON(!intel_context_is_pinned(rq->context)); + + /* + * Flush enough space to reduce the likelihood of waiting after + * we start building the request - in which case we will just + * have to repeat work. + */ + rq->reserved_space += GUC_REQUEST_SIZE; + + /* + * Note that after this point, we have committed to using + * this request as it is being used to both track the + * state of engine initialisation and liveness of the + * golden renderstate above. Think twice before you try + * to cancel/unwind this request now. + */ + + /* Unconditionally invalidate GPU caches and TLBs. */ + ret = rq->engine->emit_flush(rq, EMIT_INVALIDATE); + if (ret) + return ret; + + rq->reserved_space -= GUC_REQUEST_SIZE; + + /* + * Call pin_guc_id here rather than in the pinning step as with + * dma_resv, contexts can be repeatedly pinned / unpinned trashing the + * guc_ids and creating horrible race conditions. This is especially bad + * when guc_ids are being stolen due to over subscription. By the time + * this function is reached, it is guaranteed that the guc_id will be + * persistent until the generated request is retired. Thus, sealing these + * race conditions. It is still safe to fail here if guc_ids are + * exhausted and return -EAGAIN to the user indicating that they can try + * again in the future. + * + * There is no need for a lock here as the timeline mutex ensures at + * most one context can be executing this code path at once. The + * guc_id_ref is incremented once for every request in flight and + * decremented on each retire. When it is zero, a lock around the + * increment (in pin_guc_id) is needed to seal a race with unpin_guc_id. 
+ */ + if (atomic_add_unless(&ce->guc_id_ref, 1, 0)) + goto out; + + ret = pin_guc_id(guc, ce); /* returns 1 if new guc_id assigned */ + if (unlikely(ret < 0)) + return ret; + if (context_needs_register(ce, !!ret)) { + ret = guc_lrc_desc_pin(ce, true); + if (unlikely(ret)) { /* unwind */ + if (ret == -EPIPE) { + disable_submission(guc); + goto out; /* GPU will be reset */ + } + atomic_dec(&ce->guc_id_ref); + unpin_guc_id(guc, ce); + return ret; + } + } + + clear_bit(CONTEXT_LRCA_DIRTY, &ce->flags); + +out: + /* + * We block all requests on this context if a G2H is pending for a + * schedule disable or context deregistration as the GuC will fail a + * schedule enable or context registration if either G2H is pending + * respectively. Once a G2H returns, the fence is released that is + * blocking these requests (see guc_signal_context_fence). + * + * We can safely check the below fields outside of the lock as it isn't + * possible for these fields to transition from being clear to set but + * the converse is possible, hence the need for the check within the lock. + */ + if (likely(!context_wait_for_deregister_to_register(ce) && + !context_pending_disable(ce))) + return 0; + + spin_lock_irqsave(&ce->guc_state.lock, flags); + if (context_wait_for_deregister_to_register(ce) || + context_pending_disable(ce)) { + i915_sw_fence_await(&rq->submit); + + list_add_tail(&rq->guc_fence_link, &ce->guc_state.fences); + } + spin_unlock_irqrestore(&ce->guc_state.lock, flags); + + return 0; } -static void guc_reset_finish(struct intel_engine_cs *engine) +static int guc_virtual_context_pre_pin(struct intel_context *ce, + struct i915_gem_ww_ctx *ww, + void **vaddr) { - if (__tasklet_enable(&engine->sched_engine->tasklet)) - /* And kick in case we missed a new request submission. */ - tasklet_hi_schedule(&engine->sched_engine->tasklet); + struct intel_engine_cs *engine = guc_virtual_get_sibling(ce->engine, 0); - ENGINE_TRACE(engine, "depth->%d\n", - atomic_read(&engine->sched_engine->tasklet.count)); + return __guc_context_pre_pin(ce, engine, ww, vaddr); } -/* - * Set up the memory resources to be shared with the GuC (via the GGTT) - * at firmware loading time. - */ -int intel_guc_submission_init(struct intel_guc *guc) +static int guc_virtual_context_pin(struct intel_context *ce, void *vaddr) { - int ret; - - if (guc->stage_desc_pool) - return 0; - - ret = guc_stage_desc_pool_create(guc); - if (ret) - return ret; - /* - * Keep static analysers happy, let them know that we allocated the - * vma after testing that it didn't exist earlier. 
- */ - GEM_BUG_ON(!guc->stage_desc_pool); + struct intel_engine_cs *engine = guc_virtual_get_sibling(ce->engine, 0); - return 0; + return __guc_context_pin(ce, engine, vaddr); } -void intel_guc_submission_fini(struct intel_guc *guc) +static void guc_virtual_context_enter(struct intel_context *ce) { - if (guc->stage_desc_pool) { - guc_stage_desc_pool_destroy(guc); - } -} + intel_engine_mask_t tmp, mask = ce->engine->mask; + struct intel_engine_cs *engine; -static int guc_context_alloc(struct intel_context *ce) -{ - return lrc_alloc(ce, ce->engine); + for_each_engine_masked(engine, ce->engine->gt, mask, tmp) + intel_engine_pm_get(engine); + + intel_timeline_enter(ce->timeline); } -static int guc_context_pre_pin(struct intel_context *ce, - struct i915_gem_ww_ctx *ww, - void **vaddr) +static void guc_virtual_context_exit(struct intel_context *ce) { - return lrc_pre_pin(ce, ce->engine, ww, vaddr); + intel_engine_mask_t tmp, mask = ce->engine->mask; + struct intel_engine_cs *engine; + + for_each_engine_masked(engine, ce->engine->gt, mask, tmp) + intel_engine_pm_put(engine); + + intel_timeline_exit(ce->timeline); } -static int guc_context_pin(struct intel_context *ce, void *vaddr) +static int guc_virtual_context_alloc(struct intel_context *ce) { - return lrc_pin(ce, ce->engine, vaddr); + struct intel_engine_cs *engine = guc_virtual_get_sibling(ce->engine, 0); + + return lrc_alloc(ce, engine); } -static const struct intel_context_ops guc_context_ops = { - .alloc = guc_context_alloc, +static const struct intel_context_ops virtual_guc_context_ops = { + .alloc = guc_virtual_context_alloc, - .pre_pin = guc_context_pre_pin, - .pin = guc_context_pin, - .unpin = lrc_unpin, - .post_unpin = lrc_post_unpin, + .pre_pin = guc_virtual_context_pre_pin, + .pin = guc_virtual_context_pin, + .unpin = guc_context_unpin, + .post_unpin = guc_context_post_unpin, - .enter = intel_context_enter_engine, - .exit = intel_context_exit_engine, + .ban = guc_context_ban, - .reset = lrc_reset, - .destroy = lrc_destroy, -}; + .cancel_request = guc_context_cancel_request, -static int guc_request_alloc(struct i915_request *request) -{ - int ret; + .enter = guc_virtual_context_enter, + .exit = guc_virtual_context_exit, - GEM_BUG_ON(!intel_context_is_pinned(request->context)); + .sched_disable = guc_context_sched_disable, - /* - * Flush enough space to reduce the likelihood of waiting after - * we start building the request - in which case we will just - * have to repeat work. - */ - request->reserved_space += GUC_REQUEST_SIZE; + .destroy = guc_context_destroy, - /* - * Note that after this point, we have committed to using - * this request as it is being used to both track the - * state of engine initialisation and liveness of the - * golden renderstate above. Think twice before you try - * to cancel/unwind this request now. - */ + .get_sibling = guc_virtual_get_sibling, +}; - /* Unconditionally invalidate GPU caches and TLBs. 
*/ - ret = request->engine->emit_flush(request, EMIT_INVALIDATE); - if (ret) - return ret; +static bool +guc_irq_enable_breadcrumbs(struct intel_breadcrumbs *b) +{ + struct intel_engine_cs *sibling; + intel_engine_mask_t tmp, mask = b->engine_mask; + bool result = false; - request->reserved_space -= GUC_REQUEST_SIZE; - return 0; + for_each_engine_masked(sibling, b->irq_engine->gt, mask, tmp) + result |= intel_engine_irq_enable(sibling); + + return result; } -static inline void queue_request(struct intel_engine_cs *engine, - struct i915_request *rq, - int prio) +static void +guc_irq_disable_breadcrumbs(struct intel_breadcrumbs *b) { - GEM_BUG_ON(!list_empty(&rq->sched.link)); - list_add_tail(&rq->sched.link, - i915_sched_lookup_priolist(engine->sched_engine, prio)); - set_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags); + struct intel_engine_cs *sibling; + intel_engine_mask_t tmp, mask = b->engine_mask; + + for_each_engine_masked(sibling, b->irq_engine->gt, mask, tmp) + intel_engine_irq_disable(sibling); } -static void guc_submit_request(struct i915_request *rq) +static void guc_init_breadcrumbs(struct intel_engine_cs *engine) { - struct intel_engine_cs *engine = rq->engine; - unsigned long flags; - - /* Will be called from irq-context when using foreign fences. */ - spin_lock_irqsave(&engine->sched_engine->lock, flags); + int i; + + /* + * In GuC submission mode we do not know which physical engine a request + * will be scheduled on, this creates a problem because the breadcrumb + * interrupt is per physical engine. To work around this we attach + * requests and direct all breadcrumb interrupts to the first instance + * of an engine per class. In addition all breadcrumb interrupts are + * enabled / disabled across an engine class in unison. + */ + for (i = 0; i < MAX_ENGINE_INSTANCE; ++i) { + struct intel_engine_cs *sibling = + engine->gt->engine_class[engine->class][i]; + + if (sibling) { + if (engine->breadcrumbs != sibling->breadcrumbs) { + intel_breadcrumbs_put(engine->breadcrumbs); + engine->breadcrumbs = + intel_breadcrumbs_get(sibling->breadcrumbs); + } + break; + } + } - queue_request(engine, rq, rq_prio(rq)); + if (engine->breadcrumbs) { + engine->breadcrumbs->engine_mask |= engine->mask; + engine->breadcrumbs->irq_enable = guc_irq_enable_breadcrumbs; + engine->breadcrumbs->irq_disable = guc_irq_disable_breadcrumbs; + } +} - GEM_BUG_ON(i915_sched_engine_is_empty(engine->sched_engine)); - GEM_BUG_ON(list_empty(&rq->sched.link)); +static void guc_bump_inflight_request_prio(struct i915_request *rq, + int prio) +{ + struct intel_context *ce = rq->context; + u8 new_guc_prio = map_i915_prio_to_guc_prio(prio); + + /* Short circuit function */ + if (prio < I915_PRIORITY_NORMAL || + (rq->guc_prio == GUC_PRIO_FINI) || + (rq->guc_prio != GUC_PRIO_INIT && + !new_guc_prio_higher(rq->guc_prio, new_guc_prio))) + return; + + spin_lock(&ce->guc_active.lock); + if (rq->guc_prio != GUC_PRIO_FINI) { + if (rq->guc_prio != GUC_PRIO_INIT) + sub_context_inflight_prio(ce, rq->guc_prio); + rq->guc_prio = new_guc_prio; + add_context_inflight_prio(ce, rq->guc_prio); + update_context_prio(ce); + } + spin_unlock(&ce->guc_active.lock); +} - tasklet_hi_schedule(&engine->sched_engine->tasklet); +static void guc_retire_inflight_request_prio(struct i915_request *rq) +{ + struct intel_context *ce = rq->context; - spin_unlock_irqrestore(&engine->sched_engine->lock, flags); + spin_lock(&ce->guc_active.lock); + guc_prio_fini(rq, ce); + spin_unlock(&ce->guc_active.lock); } static void sanitize_hwsp(struct intel_engine_cs 
*engine) @@ -588,21 +2312,68 @@ static int guc_resume(struct intel_engine_cs *engine) return 0; } +static bool guc_sched_engine_disabled(struct i915_sched_engine *sched_engine) +{ + return !sched_engine->tasklet.callback; +} + static void guc_set_default_submission(struct intel_engine_cs *engine) { engine->submit_request = guc_submit_request; } +static inline void guc_kernel_context_pin(struct intel_guc *guc, + struct intel_context *ce) +{ + if (context_guc_id_invalid(ce)) + pin_guc_id(guc, ce); + guc_lrc_desc_pin(ce, true); +} + +static inline void guc_init_lrc_mapping(struct intel_guc *guc) +{ + struct intel_gt *gt = guc_to_gt(guc); + struct intel_engine_cs *engine; + enum intel_engine_id id; + + /* make sure all descriptors are clean... */ + xa_destroy(&guc->context_lookup); + + /* + * Some contexts might have been pinned before we enabled GuC + * submission, so we need to add them to the GuC bookkeeping. + * Also, after a reset of the GuC we want to make sure that the + * information shared with GuC is properly reset. The kernel LRCs are + * not attached to the gem_context, so they need to be added separately. + * + * Note: we purposely do not check the return of guc_lrc_desc_pin, + * because that function can only fail if a reset is just starting. This + * is at the end of reset so presumably another reset isn't happening + * and even if it did this code would be run again. + */ + + for_each_engine(engine, gt, id) + if (engine->kernel_context) + guc_kernel_context_pin(guc, engine->kernel_context); +} + static void guc_release(struct intel_engine_cs *engine) { engine->sanitize = NULL; /* no longer in control, nothing to sanitize */ - tasklet_kill(&engine->sched_engine->tasklet); - intel_engine_cleanup_common(engine); lrc_fini_wa_ctx(engine); } +static void virtual_guc_bump_serial(struct intel_engine_cs *engine) +{ + struct intel_engine_cs *e; + intel_engine_mask_t tmp, mask = engine->mask; + + for_each_engine_masked(e, engine->gt, mask, tmp) + e->serial++; +} + static void guc_default_vfuncs(struct intel_engine_cs *engine) { /* Default vfuncs which can be overridden by each engine. */ @@ -611,13 +2382,15 @@ static void guc_default_vfuncs(struct intel_engine_cs *engine) engine->cops = &guc_context_ops; engine->request_alloc = guc_request_alloc; + engine->add_active_request = add_to_context; + engine->remove_active_request = remove_from_context; engine->sched_engine->schedule = i915_schedule; - engine->reset.prepare = guc_reset_prepare; - engine->reset.rewind = guc_reset_rewind; - engine->reset.cancel = guc_reset_cancel; - engine->reset.finish = guc_reset_finish; + engine->reset.prepare = guc_reset_nop; + engine->reset.rewind = guc_rewind_nop; + engine->reset.cancel = guc_reset_nop; + engine->reset.finish = guc_reset_nop; engine->emit_flush = gen8_emit_flush_xcs; engine->emit_init_breadcrumb = gen8_emit_init_breadcrumb; @@ -629,13 +2402,13 @@ static void guc_default_vfuncs(struct intel_engine_cs *engine) engine->set_default_submission = guc_set_default_submission; engine->flags |= I915_ENGINE_HAS_PREEMPTION; + engine->flags |= I915_ENGINE_HAS_TIMESLICES; /* * TODO: GuC supports timeslicing and semaphores as well, but they're * handled by the firmware so some minor tweaks are required before * enabling. 
* - * engine->flags |= I915_ENGINE_HAS_TIMESLICES; * engine->flags |= I915_ENGINE_HAS_SEMAPHORES; */ @@ -666,9 +2439,21 @@ static inline void guc_default_irqs(struct intel_engine_cs *engine) intel_engine_set_irq_handler(engine, cs_irq_handler); } +static void guc_sched_engine_destroy(struct kref *kref) +{ + struct i915_sched_engine *sched_engine = + container_of(kref, typeof(*sched_engine), ref); + struct intel_guc *guc = sched_engine->private_data; + + guc->sched_engine = NULL; + tasklet_kill(&sched_engine->tasklet); /* flush the callback */ + kfree(sched_engine); +} + int intel_guc_submission_setup(struct intel_engine_cs *engine) { struct drm_i915_private *i915 = engine->i915; + struct intel_guc *guc = &engine->gt->uc.guc; /* * The setup relies on several assumptions (e.g. irqs always enabled) @@ -676,10 +2461,28 @@ int intel_guc_submission_setup(struct intel_engine_cs *engine) */ GEM_BUG_ON(GRAPHICS_VER(i915) < 11); - tasklet_setup(&engine->sched_engine->tasklet, guc_submission_tasklet); + if (!guc->sched_engine) { + guc->sched_engine = i915_sched_engine_create(ENGINE_VIRTUAL); + if (!guc->sched_engine) + return -ENOMEM; + + guc->sched_engine->schedule = i915_schedule; + guc->sched_engine->disabled = guc_sched_engine_disabled; + guc->sched_engine->private_data = guc; + guc->sched_engine->destroy = guc_sched_engine_destroy; + guc->sched_engine->bump_inflight_request_prio = + guc_bump_inflight_request_prio; + guc->sched_engine->retire_inflight_request_prio = + guc_retire_inflight_request_prio; + tasklet_setup(&guc->sched_engine->tasklet, + guc_submission_tasklet); + } + i915_sched_engine_put(engine->sched_engine); + engine->sched_engine = i915_sched_engine_get(guc->sched_engine); guc_default_vfuncs(engine); guc_default_irqs(engine); + guc_init_breadcrumbs(engine); if (engine->class == RENDER_CLASS) rcs_submission_override(engine); @@ -695,7 +2498,7 @@ int intel_guc_submission_setup(struct intel_engine_cs *engine) void intel_guc_submission_enable(struct intel_guc *guc) { - guc_stage_desc_init(guc); + guc_init_lrc_mapping(guc); } void intel_guc_submission_disable(struct intel_guc *guc) @@ -705,8 +2508,13 @@ void intel_guc_submission_disable(struct intel_guc *guc) GEM_BUG_ON(gt->awake); /* GT should be parked first */ /* Note: By the time we're here, GuC may have already been reset */ +} - guc_stage_desc_fini(guc); +static bool __guc_submission_supported(struct intel_guc *guc) +{ + /* GuC submission is unavailable for pre-Gen11 */ + return intel_guc_is_supported(guc) && + GRAPHICS_VER(guc_to_gt(guc)->i915) >= 11; } static bool __guc_submission_selected(struct intel_guc *guc) @@ -721,5 +2529,483 @@ static bool __guc_submission_selected(struct intel_guc *guc) void intel_guc_submission_init_early(struct intel_guc *guc) { + guc->submission_supported = __guc_submission_supported(guc); guc->submission_selected = __guc_submission_selected(guc); } + +static inline struct intel_context * +g2h_context_lookup(struct intel_guc *guc, u32 desc_idx) +{ + struct intel_context *ce; + + if (unlikely(desc_idx >= GUC_MAX_LRC_DESCRIPTORS)) { + drm_err(&guc_to_gt(guc)->i915->drm, + "Invalid desc_idx %u", desc_idx); + return NULL; + } + + ce = __get_context(guc, desc_idx); + if (unlikely(!ce)) { + drm_err(&guc_to_gt(guc)->i915->drm, + "Context is NULL, desc_idx %u", desc_idx); + return NULL; + } + + return ce; +} + +static void decr_outstanding_submission_g2h(struct intel_guc *guc) +{ + if (atomic_dec_and_test(&guc->outstanding_submission_g2h)) + wake_up_all(&guc->ct.wq); +} + +int 
intel_guc_deregister_done_process_msg(struct intel_guc *guc, + const u32 *msg, + u32 len) +{ + struct intel_context *ce; + u32 desc_idx = msg[0]; + + if (unlikely(len < 1)) { + drm_err(&guc_to_gt(guc)->i915->drm, "Invalid length %u", len); + return -EPROTO; + } + + ce = g2h_context_lookup(guc, desc_idx); + if (unlikely(!ce)) + return -EPROTO; + + trace_intel_context_deregister_done(ce); + + if (context_wait_for_deregister_to_register(ce)) { + struct intel_runtime_pm *runtime_pm = + &ce->engine->gt->i915->runtime_pm; + intel_wakeref_t wakeref; + + /* + * Previous owner of this guc_id has been deregistered, now safe + * to register this context. + */ + with_intel_runtime_pm(runtime_pm, wakeref) + register_context(ce, true); + guc_signal_context_fence(ce); + intel_context_put(ce); + } else if (context_destroyed(ce)) { + /* Context has been destroyed */ + release_guc_id(guc, ce); + __guc_context_destroy(ce); + } + + decr_outstanding_submission_g2h(guc); + + return 0; +} + +int intel_guc_sched_done_process_msg(struct intel_guc *guc, + const u32 *msg, + u32 len) +{ + struct intel_context *ce; + unsigned long flags; + u32 desc_idx = msg[0]; + + if (unlikely(len < 2)) { + drm_err(&guc_to_gt(guc)->i915->drm, "Invalid length %u", len); + return -EPROTO; + } + + ce = g2h_context_lookup(guc, desc_idx); + if (unlikely(!ce)) + return -EPROTO; + + if (unlikely(context_destroyed(ce) || + (!context_pending_enable(ce) && + !context_pending_disable(ce)))) { + drm_err(&guc_to_gt(guc)->i915->drm, + "Bad context sched_state 0x%x, 0x%x, desc_idx %u", + atomic_read(&ce->guc_sched_state_no_lock), + ce->guc_state.sched_state, desc_idx); + return -EPROTO; + } + + trace_intel_context_sched_done(ce); + + if (context_pending_enable(ce)) { + clr_context_pending_enable(ce); + } else if (context_pending_disable(ce)) { + bool banned; + + /* + * Unpin must be done before __guc_signal_context_fence, + * otherwise a race exists between the requests getting + * submitted + retired before this unpin completes resulting in + * the pin_count going to zero and the context still being + * enabled. 
+ */ + intel_context_sched_disable_unpin(ce); + + spin_lock_irqsave(&ce->guc_state.lock, flags); + banned = context_banned(ce); + clr_context_banned(ce); + clr_context_pending_disable(ce); + __guc_signal_context_fence(ce); + guc_blocked_fence_complete(ce); + spin_unlock_irqrestore(&ce->guc_state.lock, flags); + + if (banned) { + guc_cancel_context_requests(ce); + intel_engine_signal_breadcrumbs(ce->engine); + } + } + + decr_outstanding_submission_g2h(guc); + intel_context_put(ce); + + return 0; +} + +static void capture_error_state(struct intel_guc *guc, + struct intel_context *ce) +{ + struct intel_gt *gt = guc_to_gt(guc); + struct drm_i915_private *i915 = gt->i915; + struct intel_engine_cs *engine = __context_to_physical_engine(ce); + intel_wakeref_t wakeref; + + intel_engine_set_hung_context(engine, ce); + with_intel_runtime_pm(&i915->runtime_pm, wakeref) + i915_capture_error_state(gt, engine->mask); + atomic_inc(&i915->gpu_error.reset_engine_count[engine->uabi_class]); +} + +static void guc_context_replay(struct intel_context *ce) +{ + struct i915_sched_engine *sched_engine = ce->engine->sched_engine; + + __guc_reset_context(ce, true); + tasklet_hi_schedule(&sched_engine->tasklet); +} + +static void guc_handle_context_reset(struct intel_guc *guc, + struct intel_context *ce) +{ + trace_intel_context_reset(ce); + + if (likely(!intel_context_is_banned(ce))) { + capture_error_state(guc, ce); + guc_context_replay(ce); + } +} + +int intel_guc_context_reset_process_msg(struct intel_guc *guc, + const u32 *msg, u32 len) +{ + struct intel_context *ce; + int desc_idx; + + if (unlikely(len != 1)) { + drm_err(&guc_to_gt(guc)->i915->drm, "Invalid length %u", len); + return -EPROTO; + } + + desc_idx = msg[0]; + ce = g2h_context_lookup(guc, desc_idx); + if (unlikely(!ce)) + return -EPROTO; + + guc_handle_context_reset(guc, ce); + + return 0; +} + +static struct intel_engine_cs * +guc_lookup_engine(struct intel_guc *guc, u8 guc_class, u8 instance) +{ + struct intel_gt *gt = guc_to_gt(guc); + u8 engine_class = guc_class_to_engine_class(guc_class); + + /* Class index is checked in class converter */ + GEM_BUG_ON(instance > MAX_ENGINE_INSTANCE); + + return gt->engine_class[engine_class][instance]; +} + +int intel_guc_engine_failure_process_msg(struct intel_guc *guc, + const u32 *msg, u32 len) +{ + struct intel_engine_cs *engine; + u8 guc_class, instance; + u32 reason; + + if (unlikely(len != 3)) { + drm_err(&guc_to_gt(guc)->i915->drm, "Invalid length %u", len); + return -EPROTO; + } + + guc_class = msg[0]; + instance = msg[1]; + reason = msg[2]; + + engine = guc_lookup_engine(guc, guc_class, instance); + if (unlikely(!engine)) { + drm_err(&guc_to_gt(guc)->i915->drm, + "Invalid engine %d:%d", guc_class, instance); + return -EPROTO; + } + + intel_gt_handle_error(guc_to_gt(guc), engine->mask, + I915_ERROR_CAPTURE, + "GuC failed to reset %s (reason=0x%08x)\n", + engine->name, reason); + + return 0; +} + +void intel_guc_find_hung_context(struct intel_engine_cs *engine) +{ + struct intel_guc *guc = &engine->gt->uc.guc; + struct intel_context *ce; + struct i915_request *rq; + unsigned long index; + + /* Reset called during driver load? GuC not yet initialised! 
*/ + if (unlikely(!guc_submission_initialized(guc))) + return; + + xa_for_each(&guc->context_lookup, index, ce) { + if (!intel_context_is_pinned(ce)) + continue; + + if (intel_engine_is_virtual(ce->engine)) { + if (!(ce->engine->mask & engine->mask)) + continue; + } else { + if (ce->engine != engine) + continue; + } + + list_for_each_entry(rq, &ce->guc_active.requests, sched.link) { + if (i915_test_request_state(rq) != I915_REQUEST_ACTIVE) + continue; + + intel_engine_set_hung_context(engine, ce); + + /* Can only cope with one hang at a time... */ + return; + } + } +} + +void intel_guc_dump_active_requests(struct intel_engine_cs *engine, + struct i915_request *hung_rq, + struct drm_printer *m) +{ + struct intel_guc *guc = &engine->gt->uc.guc; + struct intel_context *ce; + unsigned long index; + unsigned long flags; + + /* Reset called during driver load? GuC not yet initialised! */ + if (unlikely(!guc_submission_initialized(guc))) + return; + + xa_for_each(&guc->context_lookup, index, ce) { + if (!intel_context_is_pinned(ce)) + continue; + + if (intel_engine_is_virtual(ce->engine)) { + if (!(ce->engine->mask & engine->mask)) + continue; + } else { + if (ce->engine != engine) + continue; + } + + spin_lock_irqsave(&ce->guc_active.lock, flags); + intel_engine_dump_active_requests(&ce->guc_active.requests, + hung_rq, m); + spin_unlock_irqrestore(&ce->guc_active.lock, flags); + } +} + +void intel_guc_submission_print_info(struct intel_guc *guc, + struct drm_printer *p) +{ + struct i915_sched_engine *sched_engine = guc->sched_engine; + struct rb_node *rb; + unsigned long flags; + + if (!sched_engine) + return; + + drm_printf(p, "GuC Number Outstanding Submission G2H: %u\n", + atomic_read(&guc->outstanding_submission_g2h)); + drm_printf(p, "GuC tasklet count: %u\n\n", + atomic_read(&sched_engine->tasklet.count)); + + spin_lock_irqsave(&sched_engine->lock, flags); + drm_printf(p, "Requests in GuC submit tasklet:\n"); + for (rb = rb_first_cached(&sched_engine->queue); rb; rb = rb_next(rb)) { + struct i915_priolist *pl = to_priolist(rb); + struct i915_request *rq; + + priolist_for_each_request(rq, pl) + drm_printf(p, "guc_id=%u, seqno=%llu\n", + rq->context->guc_id, + rq->fence.seqno); + } + spin_unlock_irqrestore(&sched_engine->lock, flags); + drm_printf(p, "\n"); +} + +static inline void guc_log_context_priority(struct drm_printer *p, + struct intel_context *ce) +{ + int i; + + drm_printf(p, "\t\tPriority: %d\n", + ce->guc_prio); + drm_printf(p, "\t\tNumber Requests (lower index == higher priority)\n"); + for (i = GUC_CLIENT_PRIORITY_KMD_HIGH; + i < GUC_CLIENT_PRIORITY_NUM; ++i) { + drm_printf(p, "\t\tNumber requests in priority band[%d]: %d\n", + i, ce->guc_prio_count[i]); + } + drm_printf(p, "\n"); +} + +void intel_guc_submission_print_context_info(struct intel_guc *guc, + struct drm_printer *p) +{ + struct intel_context *ce; + unsigned long index; + + xa_for_each(&guc->context_lookup, index, ce) { + drm_printf(p, "GuC lrc descriptor %u:\n", ce->guc_id); + drm_printf(p, "\tHW Context Desc: 0x%08x\n", ce->lrc.lrca); + drm_printf(p, "\t\tLRC Head: Internal %u, Memory %u\n", + ce->ring->head, + ce->lrc_reg_state[CTX_RING_HEAD]); + drm_printf(p, "\t\tLRC Tail: Internal %u, Memory %u\n", + ce->ring->tail, + ce->lrc_reg_state[CTX_RING_TAIL]); + drm_printf(p, "\t\tContext Pin Count: %u\n", + atomic_read(&ce->pin_count)); + drm_printf(p, "\t\tGuC ID Ref Count: %u\n", + atomic_read(&ce->guc_id_ref)); + drm_printf(p, "\t\tSchedule State: 0x%x, 0x%x\n\n", + ce->guc_state.sched_state, + 
atomic_read(&ce->guc_sched_state_no_lock)); + + guc_log_context_priority(p, ce); + } +} + +static struct intel_context * +guc_create_virtual(struct intel_engine_cs **siblings, unsigned int count) +{ + struct guc_virtual_engine *ve; + struct intel_guc *guc; + unsigned int n; + int err; + + ve = kzalloc(sizeof(*ve), GFP_KERNEL); + if (!ve) + return ERR_PTR(-ENOMEM); + + guc = &siblings[0]->gt->uc.guc; + + ve->base.i915 = siblings[0]->i915; + ve->base.gt = siblings[0]->gt; + ve->base.uncore = siblings[0]->uncore; + ve->base.id = -1; + + ve->base.uabi_class = I915_ENGINE_CLASS_INVALID; + ve->base.instance = I915_ENGINE_CLASS_INVALID_VIRTUAL; + ve->base.uabi_instance = I915_ENGINE_CLASS_INVALID_VIRTUAL; + ve->base.saturated = ALL_ENGINES; + + snprintf(ve->base.name, sizeof(ve->base.name), "virtual"); + + ve->base.sched_engine = i915_sched_engine_get(guc->sched_engine); + + ve->base.cops = &virtual_guc_context_ops; + ve->base.request_alloc = guc_request_alloc; + ve->base.bump_serial = virtual_guc_bump_serial; + + ve->base.submit_request = guc_submit_request; + + ve->base.flags = I915_ENGINE_IS_VIRTUAL; + + intel_context_init(&ve->context, &ve->base); + + for (n = 0; n < count; n++) { + struct intel_engine_cs *sibling = siblings[n]; + + GEM_BUG_ON(!is_power_of_2(sibling->mask)); + if (sibling->mask & ve->base.mask) { + DRM_DEBUG("duplicate %s entry in load balancer\n", + sibling->name); + err = -EINVAL; + goto err_put; + } + + ve->base.mask |= sibling->mask; + + if (n != 0 && ve->base.class != sibling->class) { + DRM_DEBUG("invalid mixing of engine class, sibling %d, already %d\n", + sibling->class, ve->base.class); + err = -EINVAL; + goto err_put; + } else if (n == 0) { + ve->base.class = sibling->class; + ve->base.uabi_class = sibling->uabi_class; + snprintf(ve->base.name, sizeof(ve->base.name), + "v%dx%d", ve->base.class, count); + ve->base.context_size = sibling->context_size; + + ve->base.add_active_request = + sibling->add_active_request; + ve->base.remove_active_request = + sibling->remove_active_request; + ve->base.emit_bb_start = sibling->emit_bb_start; + ve->base.emit_flush = sibling->emit_flush; + ve->base.emit_init_breadcrumb = + sibling->emit_init_breadcrumb; + ve->base.emit_fini_breadcrumb = + sibling->emit_fini_breadcrumb; + ve->base.emit_fini_breadcrumb_dw = + sibling->emit_fini_breadcrumb_dw; + ve->base.breadcrumbs = + intel_breadcrumbs_get(sibling->breadcrumbs); + + ve->base.flags |= sibling->flags; + + ve->base.props.timeslice_duration_ms = + sibling->props.timeslice_duration_ms; + ve->base.props.preempt_timeout_ms = + sibling->props.preempt_timeout_ms; + } + } + + return &ve->context; + +err_put: + intel_context_put(&ve->context); + return ERR_PTR(err); +} + + + +bool intel_guc_virtual_engine_has_heartbeat(const struct intel_engine_cs *ve) +{ + struct intel_engine_cs *engine; + intel_engine_mask_t tmp, mask = ve->mask; + + for_each_engine_masked(engine, ve->gt, mask, tmp) + if (READ_ONCE(engine->props.heartbeat_interval_ms)) + return true; + + return false; +} diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h index 3f7005018939..c7ef44fa0c36 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h @@ -10,6 +10,7 @@ #include "intel_guc.h" +struct drm_printer; struct intel_engine_cs; void intel_guc_submission_init_early(struct intel_guc *guc); @@ -20,11 +21,24 @@ void intel_guc_submission_fini(struct intel_guc *guc); int 
intel_guc_preempt_work_create(struct intel_guc *guc); void intel_guc_preempt_work_destroy(struct intel_guc *guc); int intel_guc_submission_setup(struct intel_engine_cs *engine); +void intel_guc_submission_print_info(struct intel_guc *guc, + struct drm_printer *p); +void intel_guc_submission_print_context_info(struct intel_guc *guc, + struct drm_printer *p); +void intel_guc_dump_active_requests(struct intel_engine_cs *engine, + struct i915_request *hung_rq, + struct drm_printer *m); + +bool intel_guc_virtual_engine_has_heartbeat(const struct intel_engine_cs *ve); + +int intel_guc_wait_for_pending_msg(struct intel_guc *guc, + atomic_t *wait_var, + bool interruptible, + long timeout); static inline bool intel_guc_submission_is_supported(struct intel_guc *guc) { - /* XXX: GuC submission is unavailable for now */ - return false; + return guc->submission_supported; } static inline bool intel_guc_submission_is_wanted(struct intel_guc *guc) diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.c b/drivers/gpu/drm/i915/gt/uc/intel_uc.c index 6d8b9233214e..da57d18d9f6b 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_uc.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.c @@ -34,8 +34,14 @@ static void uc_expand_default_options(struct intel_uc *uc) return; } - /* Default: enable HuC authentication only */ - i915->params.enable_guc = ENABLE_GUC_LOAD_HUC; + /* Intermediate platforms are HuC authentication only */ + if (IS_DG1(i915) || IS_ALDERLAKE_S(i915)) { + i915->params.enable_guc = ENABLE_GUC_LOAD_HUC; + return; + } + + /* Default: enable HuC authentication and GuC submission */ + i915->params.enable_guc = ENABLE_GUC_LOAD_HUC | ENABLE_GUC_SUBMISSION; } /* Reset GuC providing us with fresh state for both GuC and HuC. @@ -120,6 +126,11 @@ void intel_uc_init_early(struct intel_uc *uc) uc->ops = &uc_ops_off; } +void intel_uc_init_late(struct intel_uc *uc) +{ + intel_guc_init_late(&uc->guc); +} + void intel_uc_driver_late_release(struct intel_uc *uc) { } @@ -207,21 +218,6 @@ static void guc_handle_mmio_msg(struct intel_guc *guc) spin_unlock_irq(&guc->irq_lock); } -static void guc_reset_interrupts(struct intel_guc *guc) -{ - guc->interrupts.reset(guc); -} - -static void guc_enable_interrupts(struct intel_guc *guc) -{ - guc->interrupts.enable(guc); -} - -static void guc_disable_interrupts(struct intel_guc *guc) -{ - guc->interrupts.disable(guc); -} - static int guc_enable_communication(struct intel_guc *guc) { struct intel_gt *gt = guc_to_gt(guc); @@ -242,7 +238,7 @@ static int guc_enable_communication(struct intel_guc *guc) guc_get_mmio_msg(guc); guc_handle_mmio_msg(guc); - guc_enable_interrupts(guc); + intel_guc_enable_interrupts(guc); /* check for CT messages received before we enabled interrupts */ spin_lock_irq(>->irq_lock); @@ -265,7 +261,7 @@ static void guc_disable_communication(struct intel_guc *guc) */ guc_clear_mmio_msg(guc); - guc_disable_interrupts(guc); + intel_guc_disable_interrupts(guc); intel_guc_ct_disable(&guc->ct); @@ -323,9 +319,6 @@ static int __uc_init(struct intel_uc *uc) if (i915_inject_probe_failure(uc_to_gt(uc)->i915)) return -ENOMEM; - /* XXX: GuC submission is unavailable for now */ - GEM_BUG_ON(intel_uc_uses_guc_submission(uc)); - ret = intel_guc_init(guc); if (ret) return ret; @@ -463,7 +456,7 @@ static int __uc_init_hw(struct intel_uc *uc) if (ret) goto err_out; - guc_reset_interrupts(guc); + intel_guc_reset_interrupts(guc); /* WaEnableuKernelHeaderValidFix:skl */ /* WaEnableGuCBootHashCheckNotSet:skl,bxt,kbl */ @@ -565,23 +558,67 @@ void intel_uc_reset_prepare(struct intel_uc *uc) { 
struct intel_guc *guc = &uc->guc; - if (!intel_guc_is_ready(guc)) + uc->reset_in_progress = true; + + /* Nothing to do if GuC isn't supported */ + if (!intel_uc_supports_guc(uc)) return; + /* Firmware expected to be running when this function is called */ + if (!intel_guc_is_ready(guc)) + goto sanitize; + + if (intel_uc_uses_guc_submission(uc)) + intel_guc_submission_reset_prepare(guc); + +sanitize: __uc_sanitize(uc); } +void intel_uc_reset(struct intel_uc *uc, bool stalled) +{ + struct intel_guc *guc = &uc->guc; + + /* Firmware can not be running when this function is called */ + if (intel_uc_uses_guc_submission(uc)) + intel_guc_submission_reset(guc, stalled); +} + +void intel_uc_reset_finish(struct intel_uc *uc) +{ + struct intel_guc *guc = &uc->guc; + + uc->reset_in_progress = false; + + /* Firmware expected to be running when this function is called */ + if (intel_guc_is_fw_running(guc) && intel_uc_uses_guc_submission(uc)) + intel_guc_submission_reset_finish(guc); +} + +void intel_uc_cancel_requests(struct intel_uc *uc) +{ + struct intel_guc *guc = &uc->guc; + + /* Firmware can not be running when this function is called */ + if (intel_uc_uses_guc_submission(uc)) + intel_guc_submission_cancel_requests(guc); +} + void intel_uc_runtime_suspend(struct intel_uc *uc) { struct intel_guc *guc = &uc->guc; - int err; if (!intel_guc_is_ready(guc)) return; - err = intel_guc_suspend(guc); - if (err) - DRM_DEBUG_DRIVER("Failed to suspend GuC, err=%d", err); + /* + * Wait for any outstanding CTB before tearing down communication /w the + * GuC. + */ +#define OUTSTANDING_CTB_TIMEOUT_PERIOD (HZ / 5) + intel_guc_wait_for_pending_msg(guc, &guc->outstanding_submission_g2h, + false, OUTSTANDING_CTB_TIMEOUT_PERIOD); + GEM_WARN_ON(atomic_read(&guc->outstanding_submission_g2h)); guc_disable_communication(guc); } @@ -590,12 +627,16 @@ void intel_uc_suspend(struct intel_uc *uc) { struct intel_guc *guc = &uc->guc; intel_wakeref_t wakeref; + int err; if (!intel_guc_is_ready(guc)) return; - with_intel_runtime_pm(uc_to_gt(uc)->uncore->rpm, wakeref) - intel_uc_runtime_suspend(uc); + with_intel_runtime_pm(&uc_to_gt(uc)->i915->runtime_pm, wakeref) { + err = intel_guc_suspend(guc); + if (err) + DRM_DEBUG_DRIVER("Failed to suspend GuC, err=%d", err); + } } static int __uc_resume(struct intel_uc *uc, bool enable_communication) diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.h b/drivers/gpu/drm/i915/gt/uc/intel_uc.h index 9c954c589edf..e2da2b6e76e1 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_uc.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.h @@ -30,13 +30,19 @@ struct intel_uc { /* Snapshot of GuC log from last failed load */ struct drm_i915_gem_object *load_err_log; + + bool reset_in_progress; }; void intel_uc_init_early(struct intel_uc *uc); +void intel_uc_init_late(struct intel_uc *uc); void intel_uc_driver_late_release(struct intel_uc *uc); void intel_uc_driver_remove(struct intel_uc *uc); void intel_uc_init_mmio(struct intel_uc *uc); void intel_uc_reset_prepare(struct intel_uc *uc); +void intel_uc_reset(struct intel_uc *uc, bool stalled); +void intel_uc_reset_finish(struct intel_uc *uc); +void intel_uc_cancel_requests(struct intel_uc *uc); void intel_uc_suspend(struct intel_uc *uc); void intel_uc_runtime_suspend(struct intel_uc *uc); int intel_uc_resume(struct intel_uc *uc); @@ -81,6 +87,11 @@ uc_state_checkers(guc, guc_submission); #undef uc_state_checkers #undef __uc_state_checker +static inline int intel_uc_wait_for_idle(struct intel_uc *uc, long timeout) +{ + return intel_guc_wait_for_idle(&uc->guc, timeout); 
+} + #define intel_uc_ops_function(_NAME, _OPS, _TYPE, _RET) \ static inline _TYPE intel_uc_##_NAME(struct intel_uc *uc) \ { \ diff --git a/drivers/gpu/drm/i915/i915_debugfs_params.c b/drivers/gpu/drm/i915/i915_debugfs_params.c index 4e2b077692cb..8ecd8b42f048 100644 --- a/drivers/gpu/drm/i915/i915_debugfs_params.c +++ b/drivers/gpu/drm/i915/i915_debugfs_params.c @@ -6,9 +6,20 @@ #include #include "i915_debugfs_params.h" +#include "gt/intel_gt.h" +#include "gt/uc/intel_guc.h" #include "i915_drv.h" #include "i915_params.h" +#define MATCH_DEBUGFS_NODE_NAME(_file, _name) (strcmp((_file)->f_path.dentry->d_name.name, (_name)) == 0) + +#define GET_I915(i915, name, ptr) \ + do { \ + struct i915_params *params; \ + params = container_of(((void *) (ptr)), typeof(*params), name); \ + (i915) = container_of(params, typeof(*(i915)), params); \ + } while(0) + /* int param */ static int i915_param_int_show(struct seq_file *m, void *data) { @@ -24,6 +35,16 @@ static int i915_param_int_open(struct inode *inode, struct file *file) return single_open(file, i915_param_int_show, inode->i_private); } +static int notify_guc(struct drm_i915_private *i915) +{ + int ret = 0; + + if (intel_uc_uses_guc_submission(&i915->gt.uc)) + ret = intel_guc_global_policies_update(&i915->gt.uc.guc); + + return ret; +} + static ssize_t i915_param_int_write(struct file *file, const char __user *ubuf, size_t len, loff_t *offp) @@ -81,8 +102,10 @@ static ssize_t i915_param_uint_write(struct file *file, const char __user *ubuf, size_t len, loff_t *offp) { + struct drm_i915_private *i915; struct seq_file *m = file->private_data; unsigned int *value = m->private; + unsigned int old = *value; int ret; ret = kstrtouint_from_user(ubuf, len, 0, value); @@ -95,6 +118,14 @@ static ssize_t i915_param_uint_write(struct file *file, *value = b; } + if (!ret && MATCH_DEBUGFS_NODE_NAME(file, "reset")) { + GET_I915(i915, reset, value); + + ret = notify_guc(i915); + if (ret) + *value = old; + } + return ret ?: len; } diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c index 4d2d59a9942b..2b73ddb11c66 100644 --- a/drivers/gpu/drm/i915/i915_gem_evict.c +++ b/drivers/gpu/drm/i915/i915_gem_evict.c @@ -27,6 +27,7 @@ */ #include "gem/i915_gem_context.h" +#include "gt/intel_gt.h" #include "gt/intel_gt_requests.h" #include "i915_drv.h" diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c index a2c58b54a592..0f08bcfbe964 100644 --- a/drivers/gpu/drm/i915/i915_gpu_error.c +++ b/drivers/gpu/drm/i915/i915_gpu_error.c @@ -1429,20 +1429,37 @@ capture_engine(struct intel_engine_cs *engine, { struct intel_engine_capture_vma *capture = NULL; struct intel_engine_coredump *ee; - struct i915_request *rq; + struct intel_context *ce; + struct i915_request *rq = NULL; unsigned long flags; ee = intel_engine_coredump_alloc(engine, GFP_KERNEL); if (!ee) return NULL; - spin_lock_irqsave(&engine->sched_engine->lock, flags); - rq = intel_engine_find_active_request(engine); + ce = intel_engine_get_hung_context(engine); + if (ce) { + intel_engine_clear_hung_context(engine); + rq = intel_context_find_active_request(ce); + if (!rq || !i915_request_started(rq)) + goto no_request_capture; + } else { + /* + * Getting here with GuC enabled means it is a forced error capture + * with no actual hang. So, no need to attempt the execlist search. 
+ */ + if (!intel_uc_uses_guc_submission(&engine->gt->uc)) { + spin_lock_irqsave(&engine->sched_engine->lock, flags); + rq = intel_engine_execlist_find_hung_request(engine); + spin_unlock_irqrestore(&engine->sched_engine->lock, + flags); + } + } if (rq) capture = intel_engine_coredump_add_request(ee, rq, ATOMIC_MAYFAIL); - spin_unlock_irqrestore(&engine->sched_engine->lock, flags); if (!capture) { +no_request_capture: kfree(ee); return NULL; } diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h index 943fe485c662..3584e4d03dc3 100644 --- a/drivers/gpu/drm/i915/i915_reg.h +++ b/drivers/gpu/drm/i915/i915_reg.h @@ -4142,6 +4142,7 @@ enum { FAULT_AND_CONTINUE /* Unsupported */ }; +#define CTX_GTT_ADDRESS_MASK GENMASK(31, 12) #define GEN8_CTX_VALID (1 << 0) #define GEN8_CTX_FORCE_PD_RESTORE (1 << 1) #define GEN8_CTX_FORCE_RESTORE (1 << 2) @@ -12308,6 +12309,7 @@ enum skl_power_gate { /* MOCS (Memory Object Control State) registers */ #define GEN9_LNCFCMOCS(i) _MMIO(0xb020 + (i) * 4) /* L3 Cache Control */ +#define GEN9_LNCFCMOCS_REG_COUNT 32 #define __GEN9_RCS0_MOCS0 0xc800 #define GEN9_GFX_MOCS(i) _MMIO(__GEN9_RCS0_MOCS0 + (i) * 4) diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c index 09ebea9a0090..329c61595f02 100644 --- a/drivers/gpu/drm/i915/i915_request.c +++ b/drivers/gpu/drm/i915/i915_request.c @@ -114,6 +114,9 @@ static void i915_fence_release(struct dma_fence *fence) { struct i915_request *rq = to_request(fence); + GEM_BUG_ON(rq->guc_prio != GUC_PRIO_INIT && + rq->guc_prio != GUC_PRIO_FINI); + /* * The request is put onto a RCU freelist (i.e. the address * is immediately reused), mark the fences as being freed now. @@ -125,39 +128,17 @@ static void i915_fence_release(struct dma_fence *fence) i915_sw_fence_fini(&rq->semaphore); /* - * Keep one request on each engine for reserved use under mempressure - * - * We do not hold a reference to the engine here and so have to be - * very careful in what rq->engine we poke. The virtual engine is - * referenced via the rq->context and we released that ref during - * i915_request_retire(), ergo we must not dereference a virtual - * engine here. Not that we would want to, as the only consumer of - * the reserved engine->request_pool is the power management parking, - * which must-not-fail, and that is only run on the physical engines. - * - * Since the request must have been executed to be have completed, - * we know that it will have been processed by the HW and will - * not be unsubmitted again, so rq->engine and rq->execution_mask - * at this point is stable. rq->execution_mask will be a single - * bit if the last and _only_ engine it could execution on was a - * physical engine, if it's multiple bits then it started on and - * could still be on a virtual engine. Thus if the mask is not a - * power-of-two we assume that rq->engine may still be a virtual - * engine and so a dangling invalid pointer that we cannot dereference - * - * For example, consider the flow of a bonded request through a virtual - * engine. The request is created with a wide engine mask (all engines - * that we might execute on). On processing the bond, the request mask - * is reduced to one or more engines. If the request is subsequently - * bound to a single engine, it will then be constrained to only - * execute on that engine and never returned to the virtual engine - * after timeslicing away, see __unwind_incomplete_requests(). 
Thus we - * know that if the rq->execution_mask is a single bit, rq->engine - * can be a physical engine with the exact corresponding mask. + * Keep one request on each engine for reserved use under mempressure, + * do not use with virtual engines as this really is only needed for + * kernel contexts. */ - if (is_power_of_2(rq->execution_mask) && - !cmpxchg(&rq->engine->request_pool, NULL, rq)) + if (!intel_engine_is_virtual(rq->engine) && + !cmpxchg(&rq->engine->request_pool, NULL, rq)) { + intel_context_put(rq->context); return; + } + + intel_context_put(rq->context); kmem_cache_free(global.slab_requests, rq); } @@ -204,7 +185,7 @@ static bool irq_work_imm(struct irq_work *wrk) return false; } -static void __notify_execute_cb_imm(struct i915_request *rq) +void i915_request_notify_execute_cb_imm(struct i915_request *rq) { __notify_execute_cb(rq, irq_work_imm); } @@ -278,37 +259,6 @@ i915_request_active_engine(struct i915_request *rq, return ret; } - -static void remove_from_engine(struct i915_request *rq) -{ - struct intel_engine_cs *engine, *locked; - - /* - * Virtual engines complicate acquiring the engine timeline lock, - * as their rq->engine pointer is not stable until under that - * engine lock. The simple ploy we use is to take the lock then - * check that the rq still belongs to the newly locked engine. - */ - locked = READ_ONCE(rq->engine); - spin_lock_irq(&locked->sched_engine->lock); - while (unlikely(locked != (engine = READ_ONCE(rq->engine)))) { - spin_unlock(&locked->sched_engine->lock); - spin_lock(&engine->sched_engine->lock); - locked = engine; - } - list_del_init(&rq->sched.link); - - clear_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags); - clear_bit(I915_FENCE_FLAG_HOLD, &rq->fence.flags); - - /* Prevent further __await_execution() registering a cb, then flush */ - set_bit(I915_FENCE_FLAG_ACTIVE, &rq->fence.flags); - - spin_unlock_irq(&locked->sched_engine->lock); - - __notify_execute_cb_imm(rq); -} - static void __rq_init_watchdog(struct i915_request *rq) { rq->watchdog.timer.function = NULL; @@ -405,8 +355,7 @@ bool i915_request_retire(struct i915_request *rq) * after removing the breadcrumb and signaling it, so that we do not * inadvertently attach the breadcrumb to a completed request. 
*/ - if (!list_empty(&rq->sched.link)) - remove_from_engine(rq); + rq->engine->remove_active_request(rq); GEM_BUG_ON(!llist_empty(&rq->execute_cb)); __list_del_entry(&rq->link); /* poison neither prev/next (RCU walks) */ @@ -431,6 +380,7 @@ void i915_request_retire_upto(struct i915_request *rq) do { tmp = list_first_entry(&tl->requests, typeof(*tmp), link); + GEM_BUG_ON(!i915_request_completed(tmp)); } while (i915_request_retire(tmp) && tmp != rq); } @@ -536,7 +486,7 @@ __await_execution(struct i915_request *rq, if (llist_add(&cb->work.node.llist, &signal->execute_cb)) { if (i915_request_is_active(signal) || __request_in_flight(signal)) - __notify_execute_cb_imm(signal); + i915_request_notify_execute_cb_imm(signal); } return 0; @@ -667,11 +617,15 @@ bool __i915_request_submit(struct i915_request *request) request->ring->vaddr + request->postfix); trace_i915_request_execute(request); - engine->serial++; + if (engine->bump_serial) + engine->bump_serial(engine); + else + engine->serial++; + result = true; GEM_BUG_ON(test_bit(I915_FENCE_FLAG_ACTIVE, &request->fence.flags)); - list_move_tail(&request->sched.link, &engine->sched_engine->requests); + engine->add_active_request(request); active: clear_bit(I915_FENCE_FLAG_PQUEUE, &request->fence.flags); set_bit(I915_FENCE_FLAG_ACTIVE, &request->fence.flags); @@ -759,18 +713,6 @@ void i915_request_unsubmit(struct i915_request *request) spin_unlock_irqrestore(&engine->sched_engine->lock, flags); } -static void __cancel_request(struct i915_request *rq) -{ - struct intel_engine_cs *engine = NULL; - - i915_request_active_engine(rq, &engine); - - if (engine && intel_engine_pulse(engine)) - intel_gt_handle_error(engine->gt, engine->mask, 0, - "request cancellation by %s", - current->comm); -} - void i915_request_cancel(struct i915_request *rq, int error) { if (!i915_request_set_error_once(rq, error)) @@ -778,7 +720,7 @@ void i915_request_cancel(struct i915_request *rq, int error) set_bit(I915_FENCE_FLAG_SENTINEL, &rq->fence.flags); - __cancel_request(rq); + intel_context_cancel_request(rq->context, rq); } static int __i915_sw_fence_call @@ -950,7 +892,19 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp) } } - rq->context = ce; + /* + * Hold a reference to the intel_context over life of an i915_request. + * Without this an i915_request can exist after the context has been + * destroyed (e.g. request retired, context closed, but user space holds + * a reference to the request from an out fence). In the case of GuC + * submission + virtual engine, the engine that the request references + * is also destroyed which can trigger bad pointer dref in fence ops + * (e.g. i915_fence_get_driver_name). We could likely change these + * functions to avoid touching the engine but let's just be safe and + * hold the intel_context reference. In execlist mode the request always + * eventually points to a physical engine so this isn't an issue. 
+ */ + rq->context = intel_context_get(ce); rq->engine = ce->engine; rq->ring = ce->ring; rq->execution_mask = ce->engine->mask; @@ -973,6 +927,8 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp) rq->rcustate = get_state_synchronize_rcu(); /* acts as smp_mb() */ + rq->guc_prio = GUC_PRIO_INIT; + /* We bump the ref for the fence chain */ i915_sw_fence_reinit(&i915_request_get(rq)->submit); i915_sw_fence_reinit(&i915_request_get(rq)->semaphore); @@ -1027,6 +983,7 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp) GEM_BUG_ON(!list_empty(&rq->sched.waiters_list)); err_free: + intel_context_put(ce); kmem_cache_free(global.slab_requests, rq); err_unreserve: intel_context_unpin(ce); @@ -1379,6 +1336,9 @@ i915_request_await_external(struct i915_request *rq, struct dma_fence *fence) return err; } +static int +i915_request_await_request(struct i915_request *to, struct i915_request *from); + int i915_request_await_execution(struct i915_request *rq, struct dma_fence *fence) @@ -1462,7 +1422,8 @@ i915_request_await_request(struct i915_request *to, struct i915_request *from) return ret; } - if (is_power_of_2(to->execution_mask | READ_ONCE(from->execution_mask))) + if (!intel_engine_uses_guc(to->engine) && + is_power_of_2(to->execution_mask | READ_ONCE(from->execution_mask))) ret = await_request_submit(to, from); else ret = emit_semaphore_wait(to, from, I915_FENCE_GFP); @@ -1621,6 +1582,8 @@ __i915_request_add_to_timeline(struct i915_request *rq) prev = to_request(__i915_active_fence_set(&timeline->last_request, &rq->fence)); if (prev && !__i915_request_is_complete(prev)) { + bool uses_guc = intel_engine_uses_guc(rq->engine); + /* * The requests are supposed to be kept in order. However, * we need to be wary in case the timeline->last_request @@ -1631,7 +1594,8 @@ __i915_request_add_to_timeline(struct i915_request *rq) i915_seqno_passed(prev->fence.seqno, rq->fence.seqno)); - if (is_power_of_2(READ_ONCE(prev->engine)->mask | rq->engine->mask)) + if ((!uses_guc && is_power_of_2(READ_ONCE(prev->engine)->mask | rq->engine->mask)) || + (uses_guc && prev->context == rq->context)) i915_sw_fence_await_sw_fence(&rq->submit, &prev->submit, &rq->submitq); @@ -2072,6 +2036,47 @@ void i915_request_show(struct drm_printer *m, name); } +static bool engine_match_ring(struct intel_engine_cs *engine, struct i915_request *rq) +{ + u32 ring = ENGINE_READ(engine, RING_START); + + return ring == i915_ggtt_offset(rq->ring->vma); +} + +static bool match_ring(struct i915_request *rq) +{ + struct intel_engine_cs *engine; + bool found; + int i; + + if (!intel_engine_is_virtual(rq->engine)) + return engine_match_ring(rq->engine, rq); + + found = false; + i = 0; + while ((engine = intel_engine_get_sibling(rq->engine, i++))) { + found = engine_match_ring(engine, rq); + if (found) + break; + } + + return found; +} + +enum i915_request_state i915_test_request_state(struct i915_request *rq) +{ + if (i915_request_completed(rq)) + return I915_REQUEST_COMPLETE; + + if (!i915_request_started(rq)) + return I915_REQUEST_PENDING; + + if (match_ring(rq)) + return I915_REQUEST_ACTIVE; + + return I915_REQUEST_QUEUED; +} + #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST) #include "selftests/mock_request.c" #include "selftests/i915_request.c" diff --git a/drivers/gpu/drm/i915/i915_request.h b/drivers/gpu/drm/i915/i915_request.h index 5deb65ec5fa5..f0463d19c712 100644 --- a/drivers/gpu/drm/i915/i915_request.h +++ b/drivers/gpu/drm/i915/i915_request.h @@ -285,6 +285,22 @@ struct i915_request { struct hrtimer timer; } watchdog; + /* + 
* Requests may need to be stalled when using GuC submission waiting for + * certain GuC operations to complete. If that is the case, stalled + * requests are added to a per context list of stalled requests. The + * below list_head is the link in that list. + */ + struct list_head guc_fence_link; + + /** + * Priority level while the request is inflight. Differs slightly than + * i915 scheduler priority. + */ +#define GUC_PRIO_INIT 0xff +#define GUC_PRIO_FINI 0xfe + u8 guc_prio; + I915_SELFTEST_DECLARE(struct { struct list_head link; unsigned long delay; @@ -639,4 +655,17 @@ bool i915_request_active_engine(struct i915_request *rq, struct intel_engine_cs **active); +void i915_request_notify_execute_cb_imm(struct i915_request *rq); + +enum i915_request_state +{ + I915_REQUEST_UNKNOWN = 0, + I915_REQUEST_COMPLETE, + I915_REQUEST_PENDING, + I915_REQUEST_QUEUED, + I915_REQUEST_ACTIVE, +}; + +enum i915_request_state i915_test_request_state(struct i915_request *rq); + #endif /* I915_REQUEST_H */ diff --git a/drivers/gpu/drm/i915/i915_scheduler.c b/drivers/gpu/drm/i915/i915_scheduler.c index 3a58a9130309..3fccae3672c1 100644 --- a/drivers/gpu/drm/i915/i915_scheduler.c +++ b/drivers/gpu/drm/i915/i915_scheduler.c @@ -241,6 +241,9 @@ static void __i915_schedule(struct i915_sched_node *node, /* Fifo and depth-first replacement ensure our deps execute before us */ sched_engine = lock_sched_engine(node, sched_engine, &cache); list_for_each_entry_safe_reverse(dep, p, &dfs, dfs_link) { + struct i915_request *from = container_of(dep->signaler, + struct i915_request, + sched); INIT_LIST_HEAD(&dep->dfs_link); node = dep->signaler; @@ -254,6 +257,10 @@ static void __i915_schedule(struct i915_sched_node *node, GEM_BUG_ON(node_to_request(node)->engine->sched_engine != sched_engine); + /* Must be called before changing the nodes priority */ + if (sched_engine->bump_inflight_request_prio) + sched_engine->bump_inflight_request_prio(from, prio); + WRITE_ONCE(node->attr.priority, prio); /* @@ -431,7 +438,7 @@ void i915_request_show_with_schedule(struct drm_printer *m, rcu_read_unlock(); } -void i915_sched_engine_free(struct kref *kref) +static void default_destroy(struct kref *kref) { struct i915_sched_engine *sched_engine = container_of(kref, typeof(*sched_engine), ref); @@ -440,6 +447,11 @@ void i915_sched_engine_free(struct kref *kref) kfree(sched_engine); } +static bool default_disabled(struct i915_sched_engine *sched_engine) +{ + return false; +} + struct i915_sched_engine * i915_sched_engine_create(unsigned int subclass) { @@ -453,6 +465,8 @@ i915_sched_engine_create(unsigned int subclass) sched_engine->queue = RB_ROOT_CACHED; sched_engine->queue_priority_hint = INT_MIN; + sched_engine->destroy = default_destroy; + sched_engine->disabled = default_disabled; INIT_LIST_HEAD(&sched_engine->requests); INIT_LIST_HEAD(&sched_engine->hold); diff --git a/drivers/gpu/drm/i915/i915_scheduler.h b/drivers/gpu/drm/i915/i915_scheduler.h index 650ab8e0db9f..f4d9811ade5b 100644 --- a/drivers/gpu/drm/i915/i915_scheduler.h +++ b/drivers/gpu/drm/i915/i915_scheduler.h @@ -51,8 +51,6 @@ static inline void i915_priolist_free(struct i915_priolist *p) struct i915_sched_engine * i915_sched_engine_create(unsigned int subclass); -void i915_sched_engine_free(struct kref *kref); - static inline struct i915_sched_engine * i915_sched_engine_get(struct i915_sched_engine *sched_engine) { @@ -63,7 +61,7 @@ i915_sched_engine_get(struct i915_sched_engine *sched_engine) static inline void i915_sched_engine_put(struct i915_sched_engine 
*sched_engine) { - kref_put(&sched_engine->ref, i915_sched_engine_free); + kref_put(&sched_engine->ref, sched_engine->destroy); } static inline bool @@ -98,4 +96,10 @@ void i915_request_show_with_schedule(struct drm_printer *m, const char *prefix, int indent); +static inline bool +i915_sched_engine_disabled(struct i915_sched_engine *sched_engine) +{ + return sched_engine->disabled(sched_engine); +} + #endif /* _I915_SCHEDULER_H_ */ diff --git a/drivers/gpu/drm/i915/i915_scheduler_types.h b/drivers/gpu/drm/i915/i915_scheduler_types.h index 5935c3152bdc..b0a1b58c7893 100644 --- a/drivers/gpu/drm/i915/i915_scheduler_types.h +++ b/drivers/gpu/drm/i915/i915_scheduler_types.h @@ -163,12 +163,34 @@ struct i915_sched_engine { */ void *private_data; + /** + * @destroy: destroy schedule engine / cleanup in backend + */ + void (*destroy)(struct kref *kref); + + /** + * @disabled: check if backend has disabled submission + */ + bool (*disabled)(struct i915_sched_engine *sched_engine); + /** * @kick_backend: kick backend after a request's priority has changed */ void (*kick_backend)(const struct i915_request *rq, int prio); + /** + * @bump_inflight_request_prio: update priority of an inflight request + */ + void (*bump_inflight_request_prio)(struct i915_request *rq, + int prio); + + /** + * @retire_inflight_request_prio: indicate request is retired to + * priority tracking + */ + void (*retire_inflight_request_prio)(struct i915_request *rq); + /** * @schedule: adjust priority of request * diff --git a/drivers/gpu/drm/i915/i915_trace.h b/drivers/gpu/drm/i915/i915_trace.h index 6778ad2a14a4..fb376e8041ae 100644 --- a/drivers/gpu/drm/i915/i915_trace.h +++ b/drivers/gpu/drm/i915/i915_trace.h @@ -794,30 +794,40 @@ DECLARE_EVENT_CLASS(i915_request, TP_STRUCT__entry( __field(u32, dev) __field(u64, ctx) + __field(u32, guc_id) __field(u16, class) __field(u16, instance) __field(u32, seqno) + __field(u32, tail) ), TP_fast_assign( __entry->dev = rq->engine->i915->drm.primary->index; __entry->class = rq->engine->uabi_class; __entry->instance = rq->engine->uabi_instance; + __entry->guc_id = rq->context->guc_id; __entry->ctx = rq->fence.context; __entry->seqno = rq->fence.seqno; + __entry->tail = rq->tail; ), - TP_printk("dev=%u, engine=%u:%u, ctx=%llu, seqno=%u", + TP_printk("dev=%u, engine=%u:%u, guc_id=%u, ctx=%llu, seqno=%u, tail=%u", __entry->dev, __entry->class, __entry->instance, - __entry->ctx, __entry->seqno) + __entry->guc_id, __entry->ctx, __entry->seqno, + __entry->tail) ); DEFINE_EVENT(i915_request, i915_request_add, - TP_PROTO(struct i915_request *rq), - TP_ARGS(rq) + TP_PROTO(struct i915_request *rq), + TP_ARGS(rq) ); #if defined(CONFIG_DRM_I915_LOW_LEVEL_TRACEPOINTS) +DEFINE_EVENT(i915_request, i915_request_guc_submit, + TP_PROTO(struct i915_request *rq), + TP_ARGS(rq) +); + DEFINE_EVENT(i915_request, i915_request_submit, TP_PROTO(struct i915_request *rq), TP_ARGS(rq) @@ -885,8 +895,114 @@ TRACE_EVENT(i915_request_out, __entry->ctx, __entry->seqno, __entry->completed) ); +DECLARE_EVENT_CLASS(intel_context, + TP_PROTO(struct intel_context *ce), + TP_ARGS(ce), + + TP_STRUCT__entry( + __field(u32, guc_id) + __field(int, pin_count) + __field(u32, sched_state) + __field(u32, guc_sched_state_no_lock) + __field(u8, guc_prio) + ), + + TP_fast_assign( + __entry->guc_id = ce->guc_id; + __entry->pin_count = atomic_read(&ce->pin_count); + __entry->sched_state = ce->guc_state.sched_state; + __entry->guc_sched_state_no_lock = + atomic_read(&ce->guc_sched_state_no_lock); + __entry->guc_prio = ce->guc_prio; + ), + + 
TP_printk("guc_id=%d, pin_count=%d sched_state=0x%x,0x%x, guc_prio=%u", + __entry->guc_id, __entry->pin_count, __entry->sched_state, + __entry->guc_sched_state_no_lock, __entry->guc_prio) +); + +DEFINE_EVENT(intel_context, intel_context_set_prio, + TP_PROTO(struct intel_context *ce), + TP_ARGS(ce) +); + +DEFINE_EVENT(intel_context, intel_context_reset, + TP_PROTO(struct intel_context *ce), + TP_ARGS(ce) +); + +DEFINE_EVENT(intel_context, intel_context_ban, + TP_PROTO(struct intel_context *ce), + TP_ARGS(ce) +); + +DEFINE_EVENT(intel_context, intel_context_register, + TP_PROTO(struct intel_context *ce), + TP_ARGS(ce) +); + +DEFINE_EVENT(intel_context, intel_context_deregister, + TP_PROTO(struct intel_context *ce), + TP_ARGS(ce) +); + +DEFINE_EVENT(intel_context, intel_context_deregister_done, + TP_PROTO(struct intel_context *ce), + TP_ARGS(ce) +); + +DEFINE_EVENT(intel_context, intel_context_sched_enable, + TP_PROTO(struct intel_context *ce), + TP_ARGS(ce) +); + +DEFINE_EVENT(intel_context, intel_context_sched_disable, + TP_PROTO(struct intel_context *ce), + TP_ARGS(ce) +); + +DEFINE_EVENT(intel_context, intel_context_sched_done, + TP_PROTO(struct intel_context *ce), + TP_ARGS(ce) +); + +DEFINE_EVENT(intel_context, intel_context_create, + TP_PROTO(struct intel_context *ce), + TP_ARGS(ce) +); + +DEFINE_EVENT(intel_context, intel_context_fence_release, + TP_PROTO(struct intel_context *ce), + TP_ARGS(ce) +); + +DEFINE_EVENT(intel_context, intel_context_free, + TP_PROTO(struct intel_context *ce), + TP_ARGS(ce) +); + +DEFINE_EVENT(intel_context, intel_context_steal_guc_id, + TP_PROTO(struct intel_context *ce), + TP_ARGS(ce) +); + +DEFINE_EVENT(intel_context, intel_context_do_pin, + TP_PROTO(struct intel_context *ce), + TP_ARGS(ce) +); + +DEFINE_EVENT(intel_context, intel_context_do_unpin, + TP_PROTO(struct intel_context *ce), + TP_ARGS(ce) +); + #else #if !defined(TRACE_HEADER_MULTI_READ) +static inline void +trace_i915_request_guc_submit(struct i915_request *rq) +{ +} + static inline void trace_i915_request_submit(struct i915_request *rq) { @@ -906,6 +1022,81 @@ static inline void trace_i915_request_out(struct i915_request *rq) { } + +static inline void +trace_intel_context_set_prio(struct intel_context *ce) +{ +} + +static inline void +trace_intel_context_reset(struct intel_context *ce) +{ +} + +static inline void +trace_intel_context_ban(struct intel_context *ce) +{ +} + +static inline void +trace_intel_context_register(struct intel_context *ce) +{ +} + +static inline void +trace_intel_context_deregister(struct intel_context *ce) +{ +} + +static inline void +trace_intel_context_deregister_done(struct intel_context *ce) +{ +} + +static inline void +trace_intel_context_sched_enable(struct intel_context *ce) +{ +} + +static inline void +trace_intel_context_sched_disable(struct intel_context *ce) +{ +} + +static inline void +trace_intel_context_sched_done(struct intel_context *ce) +{ +} + +static inline void +trace_intel_context_create(struct intel_context *ce) +{ +} + +static inline void +trace_intel_context_fence_release(struct intel_context *ce) +{ +} + +static inline void +trace_intel_context_free(struct intel_context *ce) +{ +} + +static inline void +trace_intel_context_steal_guc_id(struct intel_context *ce) +{ +} + +static inline void +trace_intel_context_do_pin(struct intel_context *ce) +{ +} + +static inline void +trace_intel_context_do_unpin(struct intel_context *ce) +{ +} #endif #endif diff --git a/drivers/gpu/drm/i915/selftests/i915_request.c 
b/drivers/gpu/drm/i915/selftests/i915_request.c index bd5c96a77ba3..d67710d10615 100644 --- a/drivers/gpu/drm/i915/selftests/i915_request.c +++ b/drivers/gpu/drm/i915/selftests/i915_request.c @@ -1313,7 +1313,7 @@ static int __live_parallel_engine1(void *arg) i915_request_add(rq); err = 0; - if (i915_request_wait(rq, 0, HZ / 5) < 0) + if (i915_request_wait(rq, 0, HZ) < 0) err = -ETIME; i915_request_put(rq); if (err) @@ -1419,7 +1419,7 @@ static int __live_parallel_spin(void *arg) } igt_spinner_end(&spin); - if (err == 0 && i915_request_wait(rq, 0, HZ / 5) < 0) + if (err == 0 && i915_request_wait(rq, 0, HZ) < 0) err = -EIO; i915_request_put(rq); diff --git a/drivers/gpu/drm/i915/selftests/igt_flush_test.c b/drivers/gpu/drm/i915/selftests/igt_flush_test.c index 7b0939e3f007..a6c71fca61aa 100644 --- a/drivers/gpu/drm/i915/selftests/igt_flush_test.c +++ b/drivers/gpu/drm/i915/selftests/igt_flush_test.c @@ -19,7 +19,7 @@ int igt_flush_test(struct drm_i915_private *i915) cond_resched(); - if (intel_gt_wait_for_idle(gt, HZ / 5) == -ETIME) { + if (intel_gt_wait_for_idle(gt, HZ) == -ETIME) { pr_err("%pS timed out, cancelling all further testing.\n", __builtin_return_address(0)); diff --git a/drivers/gpu/drm/i915/selftests/igt_live_test.c b/drivers/gpu/drm/i915/selftests/igt_live_test.c index c130010a7033..1c721542e277 100644 --- a/drivers/gpu/drm/i915/selftests/igt_live_test.c +++ b/drivers/gpu/drm/i915/selftests/igt_live_test.c @@ -5,7 +5,7 @@ */ #include "i915_drv.h" -#include "gt/intel_gt_requests.h" +#include "gt/intel_gt.h" #include "../i915_selftest.h" #include "igt_flush_test.h" diff --git a/drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.c b/drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.c new file mode 100644 index 000000000000..ebd6d69b3315 --- /dev/null +++ b/drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.c @@ -0,0 +1,89 @@ +/* + * SPDX-License-Identifier: MIT + * + * Copyright © 2018 Intel Corporation + */ + +//#include "gt/intel_engine_user.h" +#include "gt/intel_gt.h" +#include "i915_drv.h" +#include "i915_selftest.h" + +#include "selftests/intel_scheduler_helpers.h" + +#define REDUCED_TIMESLICE 5 +#define REDUCED_PREEMPT 10 +#define WAIT_FOR_RESET_TIME 10000 + +int intel_selftest_modify_policy(struct intel_engine_cs *engine, + struct intel_selftest_saved_policy *saved, + u32 modify_type) + +{ + int err; + + saved->reset = engine->i915->params.reset; + saved->flags = engine->flags; + saved->timeslice = engine->props.timeslice_duration_ms; + saved->preempt_timeout = engine->props.preempt_timeout_ms; + + switch (modify_type) { + case SELFTEST_SCHEDULER_MODIFY_FAST_RESET: + /* + * Enable force pre-emption on time slice expiration + * together with engine reset on pre-emption timeout. + * This is required to make the GuC notice and reset + * the single hanging context. + * Also, reduce the preemption timeout to something + * small to speed the test up. 
+ */ + engine->i915->params.reset = 2; + engine->flags |= I915_ENGINE_WANT_FORCED_PREEMPTION; + engine->props.timeslice_duration_ms = REDUCED_TIMESLICE; + engine->props.preempt_timeout_ms = REDUCED_PREEMPT; + break; + + case SELFTEST_SCHEDULER_MODIFY_NO_HANGCHECK: + engine->props.preempt_timeout_ms = 0; + break; + + default: + pr_err("Invalid scheduler policy modification type: %d!\n", modify_type); + return -EINVAL; + } + + if (!intel_engine_uses_guc(engine)) + return 0; + + err = intel_guc_global_policies_update(&engine->gt->uc.guc); + if (err) + intel_selftest_restore_policy(engine, saved); + + return err; +} + +int intel_selftest_restore_policy(struct intel_engine_cs *engine, + struct intel_selftest_saved_policy *saved) +{ + /* Restore the original policies */ + engine->i915->params.reset = saved->reset; + engine->flags = saved->flags; + engine->props.timeslice_duration_ms = saved->timeslice; + engine->props.preempt_timeout_ms = saved->preempt_timeout; + + if (!intel_engine_uses_guc(engine)) + return 0; + + return intel_guc_global_policies_update(&engine->gt->uc.guc); +} + +int intel_selftest_wait_for_rq(struct i915_request *rq) +{ + long ret; + + ret = i915_request_wait(rq, 0, WAIT_FOR_RESET_TIME); + if (ret < 0) + return ret; + + return 0; +} diff --git a/drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.h b/drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.h new file mode 100644 index 000000000000..050bc5a8ba8b --- /dev/null +++ b/drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.h @@ -0,0 +1,35 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright © 2014-2019 Intel Corporation + */ + +#ifndef _INTEL_SELFTEST_SCHEDULER_HELPERS_H_ +#define _INTEL_SELFTEST_SCHEDULER_HELPERS_H_ + +#include + +struct i915_request; +struct intel_engine_cs; + +struct intel_selftest_saved_policy +{ + u32 flags; + u32 reset; + u64 timeslice; + u64 preempt_timeout; +}; + +enum selftest_scheduler_modify +{ + SELFTEST_SCHEDULER_MODIFY_NO_HANGCHECK = 0, + SELFTEST_SCHEDULER_MODIFY_FAST_RESET, +}; + +int intel_selftest_modify_policy(struct intel_engine_cs *engine, + struct intel_selftest_saved_policy *saved, + enum selftest_scheduler_modify modify_type); +int intel_selftest_restore_policy(struct intel_engine_cs *engine, + struct intel_selftest_saved_policy *saved); +int intel_selftest_wait_for_rq( struct i915_request *rq); + +#endif diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.c b/drivers/gpu/drm/i915/selftests/mock_gem_device.c index d189c4bd4bef..4f8180146888 100644 --- a/drivers/gpu/drm/i915/selftests/mock_gem_device.c +++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.c @@ -52,7 +52,8 @@ void mock_device_flush(struct drm_i915_private *i915) do { for_each_engine(engine, gt, id) mock_engine_flush(engine); - } while (intel_gt_retire_requests_timeout(gt, MAX_SCHEDULE_TIMEOUT)); + } while (intel_gt_retire_requests_timeout(gt, MAX_SCHEDULE_TIMEOUT, + NULL)); } static void mock_device_release(struct drm_device *dev) diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h index e20eeeca7a1c..44919d0848a0 100644 --- a/include/uapi/drm/i915_drm.h +++ b/include/uapi/drm/i915_drm.h @@ -572,6 +572,15 @@ typedef struct drm_i915_irq_wait { #define I915_SCHEDULER_CAP_PREEMPTION (1ul << 2) #define I915_SCHEDULER_CAP_SEMAPHORES (1ul << 3) #define I915_SCHEDULER_CAP_ENGINE_BUSY_STATS (1ul << 4) +/* + * Indicates the 2k user priority levels are statically mapped into 3 buckets as + * follows: + * + * -1k to -1 Low priority + * 0 Normal priority + * 1 to 1k Highest priority + */ 
+#define I915_SCHEDULER_CAP_STATIC_PRIORITY_MAP (1ul << 5)
 #define I915_PARAM_HUC_STATUS 42

From patchwork Tue Jul 20 20:57:22 2021
X-Patchwork-Submitter: Matthew Brost
X-Patchwork-Id: 12389335
From: Matthew Brost
To: ,
Subject: [RFC PATCH 02/42] drm/i915/guc: Allow flexible number of context ids
Date: Tue, 20 Jul 2021 13:57:22 -0700
Message-Id: <20210720205802.39610-3-matthew.brost@intel.com>
In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com>
References: <20210720205802.39610-1-matthew.brost@intel.com>

The number of available GuC context ids might be limited. Stop referring to the macro in code and use a variable instead.
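For readers of the squashed series, the shape of this change is simply to carry the limit in the GuC structure rather than in a compile-time constant, so it can differ per device and be tuned later. A minimal standalone sketch of the pattern follows; the types and the numeric value here are illustrative only, and the real hunks are in the diff below.

#include <stdio.h>

#define GUC_MAX_LRC_DESCRIPTORS 1024	/* illustrative value only */

struct sketch_guc {
	unsigned int max_guc_ids;	/* sizes the LRC descriptor pool / bounds checks */
	unsigned int num_guc_ids;	/* cap handed to the guc_id allocator */
};

static void sketch_init_early(struct sketch_guc *guc)
{
	/* Defaults match the old macro; num_guc_ids can later be lowered
	 * (e.g. via debugfs in the next patch) without resizing the pool. */
	guc->max_guc_ids = GUC_MAX_LRC_DESCRIPTORS;
	guc->num_guc_ids = GUC_MAX_LRC_DESCRIPTORS;
}

int main(void)
{
	struct sketch_guc guc;

	sketch_init_early(&guc);
	printf("allocate guc_ids from [0, %u), pool sized for %u descriptors\n",
	       guc.num_guc_ids, guc.max_guc_ids);
	return 0;
}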
Signed-off-by: Michal Wajdeczko Signed-off-by: Matthew Brost --- drivers/gpu/drm/i915/gt/uc/intel_guc.h | 2 ++ drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 15 ++++++++------- 2 files changed, 10 insertions(+), 7 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc.h index 5d94cf482516..8f4f44f71a39 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h @@ -53,6 +53,8 @@ struct intel_guc { */ spinlock_t contexts_lock; struct ida guc_ids; + u32 num_guc_ids; + u32 max_guc_ids; struct list_head guc_id_list; bool submission_supported; diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index 259f79dfe7bb..46a149d447e6 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -272,7 +272,7 @@ static struct guc_lrc_desc *__get_lrc_desc(struct intel_guc *guc, u32 index) { struct guc_lrc_desc *base = guc->lrc_desc_pool_vaddr; - GEM_BUG_ON(index >= GUC_MAX_LRC_DESCRIPTORS); + GEM_BUG_ON(index >= guc->max_guc_ids); return &base[index]; } @@ -281,7 +281,7 @@ static inline struct intel_context *__get_context(struct intel_guc *guc, u32 id) { struct intel_context *ce = xa_load(&guc->context_lookup, id); - GEM_BUG_ON(id >= GUC_MAX_LRC_DESCRIPTORS); + GEM_BUG_ON(id >= guc->max_guc_ids); return ce; } @@ -291,8 +291,7 @@ static int guc_lrc_desc_pool_create(struct intel_guc *guc) u32 size; int ret; - size = PAGE_ALIGN(sizeof(struct guc_lrc_desc) * - GUC_MAX_LRC_DESCRIPTORS); + size = PAGE_ALIGN(sizeof(struct guc_lrc_desc) * guc->max_guc_ids); ret = intel_guc_allocate_and_map_vma(guc, size, &guc->lrc_desc_pool, (void **)&guc->lrc_desc_pool_vaddr); if (ret) @@ -1060,7 +1059,7 @@ static void guc_submit_request(struct i915_request *rq) static int new_guc_id(struct intel_guc *guc) { return ida_simple_get(&guc->guc_ids, 0, - GUC_MAX_LRC_DESCRIPTORS, GFP_KERNEL | + guc->num_guc_ids, GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN); } @@ -2529,6 +2528,7 @@ static bool __guc_submission_selected(struct intel_guc *guc) void intel_guc_submission_init_early(struct intel_guc *guc) { + guc->max_guc_ids = guc->num_guc_ids = GUC_MAX_LRC_DESCRIPTORS; guc->submission_supported = __guc_submission_supported(guc); guc->submission_selected = __guc_submission_selected(guc); } @@ -2538,7 +2538,7 @@ g2h_context_lookup(struct intel_guc *guc, u32 desc_idx) { struct intel_context *ce; - if (unlikely(desc_idx >= GUC_MAX_LRC_DESCRIPTORS)) { + if (unlikely(desc_idx >= guc->max_guc_ids)) { drm_err(&guc_to_gt(guc)->i915->drm, "Invalid desc_idx %u", desc_idx); return NULL; @@ -2841,6 +2841,8 @@ void intel_guc_submission_print_info(struct intel_guc *guc, drm_printf(p, "GuC Number Outstanding Submission G2H: %u\n", atomic_read(&guc->outstanding_submission_g2h)); + drm_printf(p, "GuC Number GuC IDs: %u\n", guc->num_guc_ids); + drm_printf(p, "GuC Max GuC IDs: %u\n", guc->max_guc_ids); drm_printf(p, "GuC tasklet count: %u\n\n", atomic_read(&sched_engine->tasklet.count)); @@ -2880,7 +2882,6 @@ void intel_guc_submission_print_context_info(struct intel_guc *guc, { struct intel_context *ce; unsigned long index; - xa_for_each(&guc->context_lookup, index, ce) { drm_printf(p, "GuC lrc descriptor %u:\n", ce->guc_id); drm_printf(p, "\tHW Context Desc: 0x%08x\n", ce->lrc.lrca); From patchwork Tue Jul 20 20:57:23 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: 
Matthew Brost
X-Patchwork-Id: 12389325
From: Matthew Brost
To: ,
Subject: [RFC PATCH 03/42] drm/i915/guc: Connect the number of guc_ids to debugfs
Date: Tue, 20 Jul 2021 13:57:23 -0700
Message-Id: <20210720205802.39610-4-matthew.brost@intel.com>
In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com>
References: <20210720205802.39610-1-matthew.brost@intel.com>

For testing purposes it may make sense to reduce the number of guc_ids available to be allocated. Add debugfs support for setting the number of guc_ids.
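A quick way to exercise the new knob from a test harness is a few lines of userspace C. The debugfs path below is an assumption based on where the i915 GuC debugfs files are normally registered (card index and layout may differ on a given system); writes below 256 or above max_guc_ids are clamped by the kernel, as in the patch.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* Assumed path: i915 GuC debugfs files usually live under
	 * /sys/kernel/debug/dri/<card>/gt/uc/; adjust for your system. */
	const char *path = "/sys/kernel/debug/dri/0/gt/uc/guc_num_id";
	const char *val = "256";	/* smallest value the kernel accepts */
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror(path);
		return 1;
	}
	if (write(fd, val, strlen(val)) != (ssize_t)strlen(val))
		perror("write");
	close(fd);
	return 0;
}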
Signed-off-by: Matthew Brost --- .../gpu/drm/i915/gt/uc/intel_guc_debugfs.c | 31 +++++++++++++++++++ .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 3 +- 2 files changed, 33 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c index 72ddfff42f7d..7c479c5e7b3a 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.c @@ -50,11 +50,42 @@ static int guc_registered_contexts_show(struct seq_file *m, void *data) } DEFINE_GT_DEBUGFS_ATTRIBUTE(guc_registered_contexts); +static int guc_num_id_get(void *data, u64 *val) +{ + struct intel_guc *guc = data; + + if (!intel_guc_submission_is_used(guc)) + return -ENODEV; + + *val = guc->num_guc_ids; + + return 0; +} + +static int guc_num_id_set(void *data, u64 val) +{ + struct intel_guc *guc = data; + + if (!intel_guc_submission_is_used(guc)) + return -ENODEV; + + if (val > guc->max_guc_ids) + val = guc->max_guc_ids; + else if (val < 256) + val = 256; + + guc->num_guc_ids = val; + + return 0; +} +DEFINE_SIMPLE_ATTRIBUTE(guc_num_id_fops, guc_num_id_get, guc_num_id_set, "%lld\n"); + void intel_guc_debugfs_register(struct intel_guc *guc, struct dentry *root) { static const struct debugfs_gt_file files[] = { { "guc_info", &guc_info_fops, NULL }, { "guc_registered_contexts", &guc_registered_contexts_fops, NULL }, + { "guc_num_id", &guc_num_id_fops, NULL }, }; if (!intel_guc_is_supported(guc)) diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index 46a149d447e6..c6069fe2f23c 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -2540,7 +2540,8 @@ g2h_context_lookup(struct intel_guc *guc, u32 desc_idx) if (unlikely(desc_idx >= guc->max_guc_ids)) { drm_err(&guc_to_gt(guc)->i915->drm, - "Invalid desc_idx %u", desc_idx); + "Invalid desc_idx %u, max %u", + desc_idx, guc->max_guc_ids); return NULL; } From patchwork Tue Jul 20 20:57:24 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389339 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2C3B4C07E95 for ; Tue, 20 Jul 2021 20:40:51 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id EFFE960FEA for ; Tue, 20 Jul 2021 20:40:50 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org EFFE960FEA Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 800236E4E3; Tue, 20 Jul 2021 20:40:23 +0000 (UTC) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by 
gabe.freedesktop.org (Postfix) with ESMTPS id 8EC206E4DE;
 Tue, 20 Jul 2021 20:40:16 +0000 (UTC)
From: Matthew Brost
To: ,
Subject: [RFC PATCH 04/42] drm/i915/guc: Don't return -EAGAIN to user when guc_ids exhausted
Date: Tue, 20 Jul 2021 13:57:24 -0700
Message-Id: <20210720205802.39610-5-matthew.brost@intel.com>
In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com>
References: <20210720205802.39610-1-matthew.brost@intel.com>

Rather than returning -EAGAIN to the user when no guc_ids are available, implement a fair sharing algorithm in the kernel which blocks submissions until guc_ids become available. Submissions are released one at a time, based on priority, until the guc_id pressure is released, to ensure fair sharing of the guc_ids. Once the pressure is fully released, the normal guc_id allocation (at request creation time in guc_request_alloc) can resume, as this allocation path should be significantly faster and a fair sharing algorithm isn't needed when guc_ids are plentiful.

The fair sharing algorithm is implemented by forcing all submissions through the tasklet, which serializes submissions and dequeues them one at a time. If the submission doesn't have a guc_id and a new guc_id can't be found, two lists are searched: one list with contexts that are not pinned but still registered with the GuC (searched first), and another list with contexts that are pinned but do not have any submissions in flight (scheduling enabled + registered, searched second). If no guc_ids can be found, we kick a workqueue which retires requests, hopefully freeing a guc_id. The workqueue and tasklet ping-pong back and forth until a guc_id can be found.

Once a guc_id is found, we may have to disable context scheduling, depending on which list the context was stolen from. When we disable scheduling, we block the tasklet from executing until the completion G2H returns. The schedule disable must be issued from the workqueue because of the locking structure. When we deregister a context, we do the same thing (wait on the G2H), but we can safely issue the deregister H2G from the tasklet. Once all the G2H have returned, we can trigger a submission on the context.
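The handoff described above can be pictured with a minimal standalone sketch. The types and counters below are simplified stand-ins, not driver code; the real state is guc->submission_stall_reason, guc->stalled_rq and the retire_worker in the diff that follows, and only the enum value names are borrowed from the patch.

#include <errno.h>
#include <stdio.h>

enum stall_reason { STALL_NONE, STALL_GUC_ID_TASKLET, STALL_GUC_ID_WORKQUEUE };

struct sketch_guc {
	enum stall_reason stall;
	int free_ids;		/* ids still available from the allocator */
	int stealable_ids;	/* ids held by registered but idle/unpinned contexts */
};

/* Tasklet side: try the allocator, then the steal lists; if nothing is
 * available, record the stall reason and let the retire worker make progress.
 * Returns 0 on success, -EBUSY if the tasklet must back off. */
static int sketch_get_guc_id(struct sketch_guc *guc)
{
	if (guc->free_ids > 0) {
		guc->free_ids--;
		return 0;
	}
	if (guc->stealable_ids > 0) {
		/* Stealing from a pinned-but-idle context needs a schedule
		 * disable H2G first, which is issued from the workqueue. */
		guc->stealable_ids--;
		return 0;
	}
	guc->stall = STALL_GUC_ID_WORKQUEUE;
	return -EBUSY;
}

/* Workqueue side: retiring requests releases guc_ids, then the tasklet is
 * kicked again (the ping-pong described above). */
static void sketch_retire_worker(struct sketch_guc *guc)
{
	guc->free_ids++;	/* stands in for request retirement */
	guc->stall = STALL_GUC_ID_TASKLET;
}

int main(void)
{
	struct sketch_guc guc = { STALL_NONE, 0, 0 };

	if (sketch_get_guc_id(&guc) == -EBUSY)
		sketch_retire_worker(&guc);
	printf("stall=%d free=%d\n", guc.stall, guc.free_ids);
	return sketch_get_guc_id(&guc);
}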
Signed-off-by: Matthew Brost --- drivers/gpu/drm/i915/gt/intel_context_types.h | 3 + drivers/gpu/drm/i915/gt/uc/intel_guc.h | 26 +- .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 806 ++++++++++++++++-- drivers/gpu/drm/i915/i915_request.h | 6 + 4 files changed, 755 insertions(+), 86 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h index fe555551c2d2..6008ac5aa0cf 100644 --- a/drivers/gpu/drm/i915/gt/intel_context_types.h +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h @@ -185,6 +185,9 @@ struct intel_context { /* GuC LRC descriptor reference count */ atomic_t guc_id_ref; + /* Number of rq submitted without a guc_id */ + u16 guc_num_rq_submit_no_id; + /* * GuC ID link - in list when unpinned but guc_id still valid in GuC */ diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc.h index 8f4f44f71a39..cad4db2fd686 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h @@ -33,7 +33,28 @@ struct intel_guc { /* Global engine used to submit requests to GuC */ struct i915_sched_engine *sched_engine; - struct i915_request *stalled_request; + + /* Global state related to submission tasklet */ + struct i915_request *stalled_rq; + struct intel_context *stalled_context; + struct work_struct retire_worker; + unsigned long flags; + int total_num_rq_with_no_guc_id; + + /* + * Submisson stall reason. See intel_guc_submission.c for detailed + * description. + */ + enum { + STALL_NONE, + STALL_GUC_ID_WORKQUEUE, + STALL_GUC_ID_TASKLET, + STALL_SCHED_DISABLE, + STALL_REGISTER_CONTEXT, + STALL_DEREGISTER_CONTEXT, + STALL_MOVE_LRC_TAIL, + STALL_ADD_REQUEST, + } submission_stall_reason; /* intel_guc_recv interrupt related state */ spinlock_t irq_lock; @@ -55,7 +76,8 @@ struct intel_guc { struct ida guc_ids; u32 num_guc_ids; u32 max_guc_ids; - struct list_head guc_id_list; + struct list_head guc_id_list_no_ref; + struct list_head guc_id_list_unpinned; bool submission_supported; bool submission_selected; diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index c6069fe2f23c..ec9b0e7eacc7 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -59,6 +59,25 @@ * ELSP context descriptor dword into Work Item. * See guc_add_request() * + * GuC flow control state machine: + * The tasklet, workqueue (retire_worker), and the G2H handlers together more or + * less form a state machine which is used to submit requests + flow control + * requests, while waiting on resources / actions, if necessary. The enum, + * submission_stall_reason, controls the handoff of stalls between these + * entities with stalled_rq & stalled_context being the arguments. Each state + * described below. 
+ * + * STALL_NONE No stall condition + * STALL_GUC_ID_WORKQUEUE Workqueue will try to free guc_ids + * STALL_GUC_ID_TASKLET Tasklet will try to find guc_id + * STALL_SCHED_DISABLE Workqueue will issue context schedule disable + * H2G + * STALL_REGISTER_CONTEXT Tasklet needs to register context + * STALL_DEREGISTER_CONTEXT G2H handler is waiting for context deregister, + * will register context upon receipt of G2H + * STALL_MOVE_LRC_TAIL Tasklet will try to move LRC tail + * STALL_ADD_REQUEST Tasklet will try to add the request (submit + * context) */ /* GuC Virtual Engine */ @@ -72,6 +91,83 @@ guc_create_virtual(struct intel_engine_cs **siblings, unsigned int count); #define GUC_REQUEST_SIZE 64 /* bytes */ +/* + * Global GuC flags helper functions + */ +enum { + GUC_STATE_TASKLET_BLOCKED, + GUC_STATE_GUC_IDS_EXHAUSTED, +}; + +static bool tasklet_blocked(struct intel_guc *guc) +{ + return test_bit(GUC_STATE_TASKLET_BLOCKED, &guc->flags); +} + +static void set_tasklet_blocked(struct intel_guc *guc) +{ + lockdep_assert_held(&guc->sched_engine->lock); + set_bit(GUC_STATE_TASKLET_BLOCKED, &guc->flags); +} + +static void __clr_tasklet_blocked(struct intel_guc *guc) +{ + lockdep_assert_held(&guc->sched_engine->lock); + clear_bit(GUC_STATE_TASKLET_BLOCKED, &guc->flags); +} + +static void clr_tasklet_blocked(struct intel_guc *guc) +{ + unsigned long flags; + + spin_lock_irqsave(&guc->sched_engine->lock, flags); + __clr_tasklet_blocked(guc); + spin_unlock_irqrestore(&guc->sched_engine->lock, flags); +} + +static bool guc_ids_exhausted(struct intel_guc *guc) +{ + return test_bit(GUC_STATE_GUC_IDS_EXHAUSTED, &guc->flags); +} + +static bool test_and_update_guc_ids_exhausted(struct intel_guc *guc) +{ + unsigned long flags; + bool ret = false; + + /* + * Strict ordering on checking if guc_ids are exhausted isn't required, + * so let's avoid grabbing the submission lock if possible. + */ + if (guc_ids_exhausted(guc)) { + spin_lock_irqsave(&guc->sched_engine->lock, flags); + ret = guc_ids_exhausted(guc); + if (ret) + ++guc->total_num_rq_with_no_guc_id; + spin_unlock_irqrestore(&guc->sched_engine->lock, flags); + } + + return ret; +} + +static void set_and_update_guc_ids_exhausted(struct intel_guc *guc) +{ + unsigned long flags; + + spin_lock_irqsave(&guc->sched_engine->lock, flags); + ++guc->total_num_rq_with_no_guc_id; + set_bit(GUC_STATE_GUC_IDS_EXHAUSTED, &guc->flags); + spin_unlock_irqrestore(&guc->sched_engine->lock, flags); +} + +static void clr_guc_ids_exhausted(struct intel_guc *guc) +{ + lockdep_assert_held(&guc->sched_engine->lock); + GEM_BUG_ON(guc->total_num_rq_with_no_guc_id); + + clear_bit(GUC_STATE_GUC_IDS_EXHAUSTED, &guc->flags); +} + /* * Below is a set of functions which control the GuC scheduling state which do * not require a lock as all state transitions are mutually exclusive. i.e. 
It @@ -82,7 +178,10 @@ guc_create_virtual(struct intel_engine_cs **siblings, unsigned int count); #define SCHED_STATE_NO_LOCK_ENABLED BIT(0) #define SCHED_STATE_NO_LOCK_PENDING_ENABLE BIT(1) #define SCHED_STATE_NO_LOCK_REGISTERED BIT(2) -#define SCHED_STATE_NO_LOCK_BLOCKED_SHIFT 3 +#define SCHED_STATE_NO_LOCK_BLOCK_TASKLET BIT(3) +#define SCHED_STATE_NO_LOCK_GUC_ID_STOLEN BIT(4) +#define SCHED_STATE_NO_LOCK_NEEDS_REGISTER BIT(5) +#define SCHED_STATE_NO_LOCK_BLOCKED_SHIFT 6 #define SCHED_STATE_NO_LOCK_BLOCKED \ BIT(SCHED_STATE_NO_LOCK_BLOCKED_SHIFT) #define SCHED_STATE_NO_LOCK_BLOCKED_MASK \ @@ -161,6 +260,60 @@ static inline void clr_context_registered(struct intel_context *ce) &ce->guc_sched_state_no_lock); } +static inline bool context_block_tasklet(struct intel_context *ce) +{ + return (atomic_read(&ce->guc_sched_state_no_lock) & + SCHED_STATE_NO_LOCK_BLOCK_TASKLET); +} + +static inline void set_context_block_tasklet(struct intel_context *ce) +{ + atomic_or(SCHED_STATE_NO_LOCK_BLOCK_TASKLET, + &ce->guc_sched_state_no_lock); +} + +static inline void clr_context_block_tasklet(struct intel_context *ce) +{ + atomic_and((u32)~SCHED_STATE_NO_LOCK_BLOCK_TASKLET, + &ce->guc_sched_state_no_lock); +} + +static inline bool context_guc_id_stolen(struct intel_context *ce) +{ + return (atomic_read(&ce->guc_sched_state_no_lock) & + SCHED_STATE_NO_LOCK_GUC_ID_STOLEN); +} + +static inline void set_context_guc_id_stolen(struct intel_context *ce) +{ + atomic_or(SCHED_STATE_NO_LOCK_GUC_ID_STOLEN, + &ce->guc_sched_state_no_lock); +} + +static inline void clr_context_guc_id_stolen(struct intel_context *ce) +{ + atomic_and((u32)~SCHED_STATE_NO_LOCK_GUC_ID_STOLEN, + &ce->guc_sched_state_no_lock); +} + +static inline bool context_needs_register(struct intel_context *ce) +{ + return (atomic_read(&ce->guc_sched_state_no_lock) & + SCHED_STATE_NO_LOCK_NEEDS_REGISTER); +} + +static inline void set_context_needs_register(struct intel_context *ce) +{ + atomic_or(SCHED_STATE_NO_LOCK_NEEDS_REGISTER, + &ce->guc_sched_state_no_lock); +} + +static inline void clr_context_needs_register(struct intel_context *ce) +{ + atomic_and((u32)~SCHED_STATE_NO_LOCK_NEEDS_REGISTER, + &ce->guc_sched_state_no_lock); +} + /* * Below is a set of functions which control the GuC scheduling state which * require a lock, aside from the special case where the functions are called @@ -415,9 +568,12 @@ int intel_guc_wait_for_idle(struct intel_guc *guc, long timeout) true, timeout); } -static int guc_lrc_desc_pin(struct intel_context *ce, bool loop); +static inline bool request_has_no_guc_id(struct i915_request *rq) +{ + return test_bit(I915_FENCE_FLAG_GUC_ID_NOT_PINNED, &rq->fence.flags); +} -static int guc_add_request(struct intel_guc *guc, struct i915_request *rq) +static int __guc_add_request(struct intel_guc *guc, struct i915_request *rq) { int err = 0; struct intel_context *ce = rq->context; @@ -436,18 +592,15 @@ static int guc_add_request(struct intel_guc *guc, struct i915_request *rq) goto out; } + /* Ensure context is in correct state before a submission */ + GEM_BUG_ON(ce->guc_num_rq_submit_no_id); + GEM_BUG_ON(request_has_no_guc_id(rq)); GEM_BUG_ON(!atomic_read(&ce->guc_id_ref)); + GEM_BUG_ON(context_needs_register(ce)); GEM_BUG_ON(context_guc_id_invalid(ce)); - - /* - * Corner case where the GuC firmware was blown away and reloaded while - * this context was pinned. 
- */ - if (unlikely(!lrc_desc_registered(guc, ce->guc_id))) { - err = guc_lrc_desc_pin(ce, false); - if (unlikely(err)) - goto out; - } + GEM_BUG_ON(context_pending_disable(ce)); + GEM_BUG_ON(context_wait_for_deregister_to_register(ce)); + GEM_BUG_ON(!lrc_desc_registered(guc, ce->guc_id)); if (unlikely(context_blocked(ce))) goto out; @@ -455,6 +608,8 @@ static int guc_add_request(struct intel_guc *guc, struct i915_request *rq) enabled = context_enabled(ce); if (!enabled) { + GEM_BUG_ON(context_pending_enable(ce)); + action[len++] = INTEL_GUC_ACTION_SCHED_CONTEXT_MODE_SET; action[len++] = ce->guc_id; action[len++] = GUC_CONTEXT_ENABLE; @@ -482,6 +637,68 @@ static int guc_add_request(struct intel_guc *guc, struct i915_request *rq) return err; } +static int guc_add_request(struct intel_guc *guc, struct i915_request *rq) +{ + int ret; + + lockdep_assert_held(&guc->sched_engine->lock); + + ret = __guc_add_request(guc, rq); + if (ret == -EBUSY) { + guc->stalled_rq = rq; + guc->submission_stall_reason = STALL_ADD_REQUEST; + } else { + guc->stalled_rq = NULL; + guc->submission_stall_reason = STALL_NONE; + } + + return ret; +} + +static int guc_lrc_desc_pin(struct intel_context *ce, bool loop); + +static int tasklet_register_context(struct intel_guc *guc, + struct i915_request *rq) +{ + struct intel_context *ce = rq->context; + int ret = 0; + + /* Check state */ + lockdep_assert_held(&guc->sched_engine->lock); + GEM_BUG_ON(ce->guc_num_rq_submit_no_id); + GEM_BUG_ON(request_has_no_guc_id(rq)); + GEM_BUG_ON(context_guc_id_invalid(ce)); + GEM_BUG_ON(!atomic_read(&ce->guc_id_ref)); + + /* + * The guc_id is getting pinned during the tasklet and we need to + * register this context or a corner case where the GuC firwmare was + * blown away and reloaded while this context was pinned + */ + if (unlikely((!lrc_desc_registered(guc, ce->guc_id) || + context_needs_register(ce)) && + !intel_context_is_banned(ce))) { + GEM_BUG_ON(context_pending_disable(ce)); + GEM_BUG_ON(context_wait_for_deregister_to_register(ce)); + + ret = guc_lrc_desc_pin(ce, false); + + if (likely(ret != -EBUSY)) + clr_context_needs_register(ce); + + if (unlikely(ret == -EBUSY)) { + guc->stalled_rq = rq; + guc->submission_stall_reason = STALL_REGISTER_CONTEXT; + } else if (unlikely(ret == -EINPROGRESS)) { + guc->stalled_rq = rq; + guc->submission_stall_reason = STALL_DEREGISTER_CONTEXT; + } + } + + return ret; +} + + static inline void guc_set_lrc_tail(struct i915_request *rq) { rq->context->lrc_reg_state[CTX_RING_TAIL] = @@ -493,77 +710,142 @@ static inline int rq_prio(const struct i915_request *rq) return rq->sched.attr.priority; } +static void kick_retire_wq(struct intel_guc *guc) +{ + queue_work(system_unbound_wq, &guc->retire_worker); +} + +static int tasklet_pin_guc_id(struct intel_guc *guc, struct i915_request *rq); + static int guc_dequeue_one_context(struct intel_guc *guc) { struct i915_sched_engine * const sched_engine = guc->sched_engine; - struct i915_request *last = NULL; - bool submit = false; + struct i915_request *last = guc->stalled_rq; + bool submit = !!last; struct rb_node *rb; int ret; lockdep_assert_held(&sched_engine->lock); + GEM_BUG_ON(guc->stalled_context); + GEM_BUG_ON(!submit && guc->submission_stall_reason); - if (guc->stalled_request) { - submit = true; - last = guc->stalled_request; - goto resubmit; - } + if (submit) { + /* Flow control conditions */ + switch (guc->submission_stall_reason) { + case STALL_GUC_ID_TASKLET: + goto done; + case STALL_REGISTER_CONTEXT: + goto register_context; + case 
STALL_MOVE_LRC_TAIL: + goto move_lrc_tail; + case STALL_ADD_REQUEST: + goto add_request; + default: + GEM_BUG_ON("Invalid stall state"); + } + } else { + GEM_BUG_ON(!guc->total_num_rq_with_no_guc_id && + guc_ids_exhausted(guc)); - while ((rb = rb_first_cached(&sched_engine->queue))) { - struct i915_priolist *p = to_priolist(rb); - struct i915_request *rq, *rn; + while ((rb = rb_first_cached(&sched_engine->queue))) { + struct i915_priolist *p = to_priolist(rb); + struct i915_request *rq, *rn; - priolist_for_each_request_consume(rq, rn, p) { - if (last && rq->context != last->context) - goto done; + priolist_for_each_request_consume(rq, rn, p) { + if (last && rq->context != last->context) + goto done; - list_del_init(&rq->sched.link); + list_del_init(&rq->sched.link); - __i915_request_submit(rq); + __i915_request_submit(rq); - trace_i915_request_in(rq, 0); - last = rq; - submit = true; - } + trace_i915_request_in(rq, 0); + last = rq; + submit = true; + } - rb_erase_cached(&p->node, &sched_engine->queue); - i915_priolist_free(p); + rb_erase_cached(&p->node, &sched_engine->queue); + i915_priolist_free(p); + } } + done: if (submit) { + struct intel_context *ce = last->context; + + if (ce->guc_num_rq_submit_no_id) { + ret = tasklet_pin_guc_id(guc, last); + if (ret) + goto blk_tasklet_kick; + } + +register_context: + ret = tasklet_register_context(guc, last); + if (unlikely(ret == -EINPROGRESS)) + goto blk_tasklet; + else if (unlikely(ret == -EPIPE)) + goto deadlk; + else if (ret == -EBUSY) + goto schedule_tasklet; + else if (unlikely(ret != 0)) { + GEM_WARN_ON(ret); /* Unexpected */ + goto deadlk; + } + +move_lrc_tail: guc_set_lrc_tail(last); -resubmit: + +add_request: ret = guc_add_request(guc, last); if (unlikely(ret == -EPIPE)) goto deadlk; - else if (ret == -EBUSY) { - tasklet_schedule(&sched_engine->tasklet); - guc->stalled_request = last; - return false; + else if (ret == -EBUSY) + goto schedule_tasklet; + else if (unlikely(ret != 0)) { + GEM_WARN_ON(ret); /* Unexpected */ + goto deadlk; } } - guc->stalled_request = NULL; + /* + * No requests without a guc_id, enable guc_id allocation at request + * creation time (guc_request_alloc). 
+ */ + if (!guc->total_num_rq_with_no_guc_id) + clr_guc_ids_exhausted(guc); + return submit; +schedule_tasklet: + tasklet_schedule(&sched_engine->tasklet); + return false; + deadlk: sched_engine->tasklet.callback = NULL; tasklet_disable_nosync(&sched_engine->tasklet); return false; + +blk_tasklet_kick: + kick_retire_wq(guc); +blk_tasklet: + set_tasklet_blocked(guc); + return false; } static void guc_submission_tasklet(struct tasklet_struct *t) { struct i915_sched_engine *sched_engine = from_tasklet(sched_engine, t, tasklet); + struct intel_guc *guc = sched_engine->private_data; unsigned long flags; bool loop; spin_lock_irqsave(&sched_engine->lock, flags); - do { - loop = guc_dequeue_one_context(sched_engine->private_data); - } while (loop); + if (likely(!tasklet_blocked(guc))) + do { + loop = guc_dequeue_one_context(guc); + } while (loop); i915_sched_engine_reset_on_empty(sched_engine); @@ -646,6 +928,14 @@ submission_disabled(struct intel_guc *guc) !__tasklet_is_enabled(&sched_engine->tasklet)); } +static void kick_tasklet(struct intel_guc *guc) +{ + struct i915_sched_engine * const sched_engine = guc->sched_engine; + + if (likely(!tasklet_blocked(guc))) + tasklet_hi_schedule(&sched_engine->tasklet); +} + static void disable_submission(struct intel_guc *guc) { struct i915_sched_engine * const sched_engine = guc->sched_engine; @@ -669,8 +959,16 @@ static void enable_submission(struct intel_guc *guc) __tasklet_enable(&sched_engine->tasklet)) { GEM_BUG_ON(!guc->ct.enabled); + /* Reset tasklet state */ + guc->stalled_rq = NULL; + if (guc->stalled_context) + intel_context_put(guc->stalled_context); + guc->stalled_context = NULL; + guc->submission_stall_reason = STALL_NONE; + guc->flags = 0; + /* And kick in case we missed a new request submission. */ - tasklet_hi_schedule(&sched_engine->tasklet); + kick_tasklet(guc); } spin_unlock_irqrestore(&guc->sched_engine->lock, flags); } @@ -848,6 +1146,7 @@ static void __guc_reset_context(struct intel_context *ce, bool stalled) out_replay: guc_reset_state(ce, head, stalled); __unwind_incomplete_requests(ce); + ce->guc_num_rq_submit_no_id = 0; intel_context_put(ce); } @@ -879,6 +1178,7 @@ static void guc_cancel_context_requests(struct intel_context *ce) spin_lock(&ce->guc_active.lock); list_for_each_entry(rq, &ce->guc_active.requests, sched.link) i915_request_put(i915_request_mark_eio(rq)); + ce->guc_num_rq_submit_no_id = 0; spin_unlock(&ce->guc_active.lock); spin_unlock_irqrestore(&sched_engine->lock, flags); } @@ -915,11 +1215,15 @@ guc_cancel_sched_engine_requests(struct i915_sched_engine *sched_engine) struct i915_priolist *p = to_priolist(rb); priolist_for_each_request_consume(rq, rn, p) { + struct intel_context *ce = rq->context; + list_del_init(&rq->sched.link); __i915_request_submit(rq); i915_request_put(i915_request_mark_eio(rq)); + + ce->guc_num_rq_submit_no_id = 0; } rb_erase_cached(&p->node, &sched_engine->queue); @@ -970,6 +1274,51 @@ void intel_guc_submission_reset_finish(struct intel_guc *guc) intel_gt_unpark_heartbeats(guc_to_gt(guc)); } +static void retire_worker_sched_disable(struct intel_guc *guc, + struct intel_context *ce); + +static void retire_worker_func(struct work_struct *w) +{ + struct intel_guc *guc = + container_of(w, struct intel_guc, retire_worker); + + /* + * It is possible that another thread issues the schedule disable + that + * G2H completes moving the state machine further along to a point + * where nothing needs to be done here. Let's be paranoid and kick the + * tasklet in that case. 
+ */ + if (guc->submission_stall_reason != STALL_SCHED_DISABLE && + guc->submission_stall_reason != STALL_GUC_ID_WORKQUEUE) { + kick_tasklet(guc); + return; + } + + if (guc->submission_stall_reason == STALL_SCHED_DISABLE) { + GEM_BUG_ON(!guc->stalled_context); + GEM_BUG_ON(context_guc_id_invalid(guc->stalled_context)); + + retire_worker_sched_disable(guc, guc->stalled_context); + } + + /* + * guc_id pressure, always try to release it regardless of state, + * albeit after possibly issuing a schedule disable as that is async + * operation. + */ + intel_gt_retire_requests(guc_to_gt(guc)); + + if (guc->submission_stall_reason == STALL_GUC_ID_WORKQUEUE) { + GEM_BUG_ON(guc->stalled_context); + + /* Hopefully guc_ids are now available, kick tasklet */ + guc->submission_stall_reason = STALL_GUC_ID_TASKLET; + clr_tasklet_blocked(guc); + + kick_tasklet(guc); + } +} + /* * Set up the memory resources to be shared with the GuC (via the GGTT) * at firmware loading time. @@ -993,9 +1342,12 @@ int intel_guc_submission_init(struct intel_guc *guc) xa_init_flags(&guc->context_lookup, XA_FLAGS_LOCK_IRQ); spin_lock_init(&guc->contexts_lock); - INIT_LIST_HEAD(&guc->guc_id_list); + INIT_LIST_HEAD(&guc->guc_id_list_no_ref); + INIT_LIST_HEAD(&guc->guc_id_list_unpinned); ida_init(&guc->guc_ids); + INIT_WORK(&guc->retire_worker, retire_worker_func); + return 0; } @@ -1012,10 +1364,28 @@ static inline void queue_request(struct i915_sched_engine *sched_engine, struct i915_request *rq, int prio) { + bool empty = i915_sched_engine_is_empty(sched_engine); + GEM_BUG_ON(!list_empty(&rq->sched.link)); list_add_tail(&rq->sched.link, i915_sched_lookup_priolist(sched_engine, prio)); set_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags); + + if (empty) + kick_tasklet(&rq->engine->gt->uc.guc); +} + +static bool need_tasklet(struct intel_guc *guc, struct intel_context *ce) +{ + struct i915_sched_engine * const sched_engine = + ce->engine->sched_engine; + + lockdep_assert_held(&sched_engine->lock); + + return guc_ids_exhausted(guc) || submission_disabled(guc) || + guc->stalled_rq || guc->stalled_context || + !lrc_desc_registered(guc, ce->guc_id) || + !i915_sched_engine_is_empty(sched_engine); } static int guc_bypass_tasklet_submit(struct intel_guc *guc, @@ -1029,8 +1399,6 @@ static int guc_bypass_tasklet_submit(struct intel_guc *guc, guc_set_lrc_tail(rq); ret = guc_add_request(guc, rq); - if (ret == -EBUSY) - guc->stalled_request = rq; if (unlikely(ret == -EPIPE)) disable_submission(guc); @@ -1047,11 +1415,10 @@ static void guc_submit_request(struct i915_request *rq) /* Will be called from irq-context when using foreign fences. */ spin_lock_irqsave(&sched_engine->lock, flags); - if (submission_disabled(guc) || guc->stalled_request || - !i915_sched_engine_is_empty(sched_engine)) + if (need_tasklet(guc, rq->context)) queue_request(sched_engine, rq, rq_prio(rq)); else if (guc_bypass_tasklet_submit(guc, rq) == -EBUSY) - tasklet_hi_schedule(&sched_engine->tasklet); + kick_tasklet(guc); spin_unlock_irqrestore(&sched_engine->lock, flags); } @@ -1083,32 +1450,71 @@ static void release_guc_id(struct intel_guc *guc, struct intel_context *ce) spin_unlock_irqrestore(&guc->contexts_lock, flags); } -static int steal_guc_id(struct intel_guc *guc) +/* + * We have two lists for guc_ids available to steal. 
One list is for contexts + * that to have a zero guc_id_ref but are still pinned (scheduling enabled, only + * available inside tasklet) and the other is for contexts that are not pinned + * but still registered (available both outside and inside tasklet). Stealing + * from the latter only requires a deregister H2G, while the former requires a + * schedule disable H2G + a deregister H2G. + */ +static struct list_head *get_guc_id_list(struct intel_guc *guc, + bool unpinned) +{ + if (unpinned) + return &guc->guc_id_list_unpinned; + else + return &guc->guc_id_list_no_ref; +} + +static int steal_guc_id(struct intel_guc *guc, bool unpinned) { struct intel_context *ce; int guc_id; + struct list_head *guc_id_list = get_guc_id_list(guc, unpinned); lockdep_assert_held(&guc->contexts_lock); - if (!list_empty(&guc->guc_id_list)) { - ce = list_first_entry(&guc->guc_id_list, + if (!list_empty(guc_id_list)) { + ce = list_first_entry(guc_id_list, struct intel_context, guc_id_link); + /* Ensure context getting stolen in expected state */ GEM_BUG_ON(atomic_read(&ce->guc_id_ref)); GEM_BUG_ON(context_guc_id_invalid(ce)); + GEM_BUG_ON(context_guc_id_stolen(ce)); list_del_init(&ce->guc_id_link); guc_id = ce->guc_id; clr_context_registered(ce); - set_context_guc_id_invalid(ce); + + /* + * If stealing from the pinned list, defer invalidating + * the guc_id until the retire workqueue processes this + * context. + */ + if (!unpinned) { + GEM_BUG_ON(guc->stalled_context); + guc->stalled_context = intel_context_get(ce); + set_context_guc_id_stolen(ce); + } else { + set_context_guc_id_invalid(ce); + } + return guc_id; } else { return -EAGAIN; } } -static int assign_guc_id(struct intel_guc *guc, u16 *out) +enum { /* Return values for pin_guc_id / assign_guc_id */ + SAME_GUC_ID =0, + NEW_GUC_ID_DISABLED =1, + NEW_GUC_ID_ENABLED =2, +}; + +static int assign_guc_id(struct intel_guc *guc, u16 *out, bool tasklet) { int ret; @@ -1116,17 +1522,33 @@ static int assign_guc_id(struct intel_guc *guc, u16 *out) ret = new_guc_id(guc); if (unlikely(ret < 0)) { - ret = steal_guc_id(guc); - if (ret < 0) - return ret; + ret = steal_guc_id(guc, true); + if (ret >= 0) { + *out = ret; + ret = NEW_GUC_ID_DISABLED; + } else if (ret < 0 && tasklet) { + /* + * We only steal a guc_id from a context with scheduling + * enabled if guc_ids are exhausted and we are submitting + * from the tasklet. + */ + ret = steal_guc_id(guc, false); + if (ret >= 0) { + *out = ret; + ret = NEW_GUC_ID_ENABLED; + } + } + } else { + *out = ret; + ret = SAME_GUC_ID; } - *out = ret; - return 0; + return ret; } #define PIN_GUC_ID_TRIES 4 -static int pin_guc_id(struct intel_guc *guc, struct intel_context *ce) +static int pin_guc_id(struct intel_guc *guc, struct intel_context *ce, + bool tasklet) { int ret = 0; unsigned long flags, tries = PIN_GUC_ID_TRIES; @@ -1136,11 +1558,15 @@ static int pin_guc_id(struct intel_guc *guc, struct intel_context *ce) try_again: spin_lock_irqsave(&guc->contexts_lock, flags); + if (!tasklet && guc_ids_exhausted(guc)) { + ret = -EAGAIN; + goto out_unlock; + } + if (context_guc_id_invalid(ce)) { - ret = assign_guc_id(guc, &ce->guc_id); - if (ret) + ret = assign_guc_id(guc, &ce->guc_id, tasklet); + if (unlikely(ret < 0)) goto out_unlock; - ret = 1; /* Indidcates newly assigned guc_id */ } if (!list_empty(&ce->guc_id_link)) list_del_init(&ce->guc_id_link); @@ -1156,8 +1582,11 @@ static int pin_guc_id(struct intel_guc *guc, struct intel_context *ce) * attempting to retire more requests. 
Double the sleep period each * subsequent pass before finally giving up. The sleep period has max of * 100ms and minimum of 1ms. + * + * We only try this if outside the tasklet, inside the tasklet we have a + * (slower, more complex, blocking) different flow control algorithm. */ - if (ret == -EAGAIN && --tries) { + if (ret == -EAGAIN && --tries && !tasklet) { if (PIN_GUC_ID_TRIES - tries > 1) { unsigned int timeslice_shifted = ce->engine->props.timeslice_duration_ms << @@ -1174,16 +1603,26 @@ static int pin_guc_id(struct intel_guc *guc, struct intel_context *ce) return ret; } -static void unpin_guc_id(struct intel_guc *guc, struct intel_context *ce) +static void unpin_guc_id(struct intel_guc *guc, + struct intel_context *ce, + bool unpinned) { unsigned long flags; GEM_BUG_ON(atomic_read(&ce->guc_id_ref) < 0); spin_lock_irqsave(&guc->contexts_lock, flags); - if (!context_guc_id_invalid(ce) && list_empty(&ce->guc_id_link) && - !atomic_read(&ce->guc_id_ref)) - list_add_tail(&ce->guc_id_link, &guc->guc_id_list); + + if (!list_empty(&ce->guc_id_link)) + list_del_init(&ce->guc_id_link); + + if (!context_guc_id_invalid(ce) && !context_guc_id_stolen(ce) && + !atomic_read(&ce->guc_id_ref)) { + struct list_head *head = get_guc_id_list(guc, unpinned); + + list_add_tail(&ce->guc_id_link, head); + } + spin_unlock_irqrestore(&guc->contexts_lock, flags); } @@ -1285,6 +1724,7 @@ static int guc_lrc_desc_pin(struct intel_context *ce, bool loop) int ret = 0; GEM_BUG_ON(!engine->mask); + GEM_BUG_ON(context_guc_id_invalid(ce)); /* * Ensure LRC + CT vmas are is same region as write barrier is done @@ -1327,6 +1767,7 @@ static int guc_lrc_desc_pin(struct intel_context *ce, bool loop) trace_intel_context_steal_guc_id(ce); if (!loop) { set_context_wait_for_deregister_to_register(ce); + set_context_block_tasklet(ce); intel_context_get(ce); } else { bool disabled; @@ -1354,7 +1795,14 @@ static int guc_lrc_desc_pin(struct intel_context *ce, bool loop) ret = deregister_context(ce, ce->guc_id, loop); if (unlikely(ret == -EBUSY)) { clr_context_wait_for_deregister_to_register(ce); + clr_context_block_tasklet(ce); intel_context_put(ce); + } else if (!loop && !ret) { + /* + * A context de-registration has been issued from within + * the tasklet. Need to block until it complete. + */ + return -EINPROGRESS; } else if (unlikely(ret == -ENODEV)) ret = 0; /* Will get registered later */ } else { @@ -1409,7 +1857,9 @@ static void guc_context_unpin(struct intel_context *ce) { struct intel_guc *guc = ce_to_guc(ce); - unpin_guc_id(guc, ce); + GEM_BUG_ON(context_enabled(ce)); + + unpin_guc_id(guc, ce, true); lrc_unpin(ce); } @@ -1736,6 +2186,8 @@ static void guc_context_destroy(struct kref *kref) unsigned long flags; bool disabled; + GEM_BUG_ON(context_guc_id_stolen(ce)); + /* * If the guc_id is invalid this context has been stolen and we can free * it immediately. 
Also can be freed immediately if the context is not @@ -1894,6 +2346,9 @@ static void add_to_context(struct i915_request *rq) spin_lock(&ce->guc_active.lock); list_move_tail(&rq->sched.link, &ce->guc_active.requests); + if (unlikely(request_has_no_guc_id(rq))) + ++ce->guc_num_rq_submit_no_id; + if (rq->guc_prio == GUC_PRIO_INIT) { rq->guc_prio = new_guc_prio; add_context_inflight_prio(ce, rq->guc_prio); @@ -1933,7 +2388,12 @@ static void remove_from_context(struct i915_request *rq) spin_unlock_irq(&ce->guc_active.lock); - atomic_dec(&ce->guc_id_ref); + if (likely(!request_has_no_guc_id(rq))) + atomic_dec(&ce->guc_id_ref); + else + --ce_to_guc(rq->context)->total_num_rq_with_no_guc_id; + unpin_guc_id(ce_to_guc(ce), ce, false); + i915_request_notify_execute_cb_imm(rq); } @@ -1985,13 +2445,144 @@ static void guc_signal_context_fence(struct intel_context *ce) spin_unlock_irqrestore(&ce->guc_state.lock, flags); } -static bool context_needs_register(struct intel_context *ce, bool new_guc_id) +static void invalidate_guc_id_sched_disable(struct intel_context *ce) +{ + set_context_guc_id_invalid(ce); + wmb(); + clr_context_guc_id_stolen(ce); +} + +static void retire_worker_sched_disable(struct intel_guc *guc, + struct intel_context *ce) +{ + unsigned long flags; + bool disabled; + + guc->stalled_context = NULL; + spin_lock_irqsave(&ce->guc_state.lock, flags); + disabled = submission_disabled(guc); + if (!disabled && !context_pending_disable(ce) && context_enabled(ce)) { + /* + * Still enabled, issue schedule disable + configure state so + * when G2H returns tasklet is kicked. + */ + + struct intel_runtime_pm *runtime_pm = + &ce->engine->gt->i915->runtime_pm; + intel_wakeref_t wakeref; + u16 guc_id; + + /* + * We add +2 here as the schedule disable complete CTB handler + * calls intel_context_sched_disable_unpin (-2 to pin_count). + */ + GEM_BUG_ON(!atomic_read(&ce->pin_count)); + atomic_add(2, &ce->pin_count); + + set_context_block_tasklet(ce); + guc_id = prep_context_pending_disable(ce); + spin_unlock_irqrestore(&ce->guc_state.lock, flags); + + with_intel_runtime_pm(runtime_pm, wakeref) + __guc_context_sched_disable(guc, ce, guc_id); + + invalidate_guc_id_sched_disable(ce); + } else if (!disabled && context_pending_disable(ce)) { + /* + * Schedule disable in flight, set bit to kick tasklet in G2H + * handler and call it a day. + */ + + set_context_block_tasklet(ce); + spin_unlock_irqrestore(&ce->guc_state.lock, flags); + + invalidate_guc_id_sched_disable(ce); + } else { + /* Schedule disable is done, kick tasklet */ + + spin_unlock_irqrestore(&ce->guc_state.lock, flags); + + invalidate_guc_id_sched_disable(ce); + + guc->submission_stall_reason = STALL_REGISTER_CONTEXT; + clr_tasklet_blocked(guc); + + kick_tasklet(ce_to_guc(ce)); + } + + intel_context_put(ce); +} + +static bool context_needs_lrc_desc_pin(struct intel_context *ce, bool new_guc_id) { return (new_guc_id || test_bit(CONTEXT_LRCA_DIRTY, &ce->flags) || !lrc_desc_registered(ce_to_guc(ce), ce->guc_id)) && !submission_disabled(ce_to_guc(ce)); } +static int tasklet_pin_guc_id(struct intel_guc *guc, struct i915_request *rq) +{ + struct intel_context *ce = rq->context; + int ret = 0; + + lockdep_assert_held(&guc->sched_engine->lock); + GEM_BUG_ON(!ce->guc_num_rq_submit_no_id); + + if (atomic_add_unless(&ce->guc_id_ref, ce->guc_num_rq_submit_no_id, 0)) + goto out; + + ret = pin_guc_id(guc, ce, true); + if (unlikely(ret < 0)) { + /* + * No guc_ids available, disable the tasklet and kick the retire + * workqueue hopefully freeing up some guc_ids. 
+ */ + guc->stalled_rq = rq; + guc->submission_stall_reason = STALL_GUC_ID_WORKQUEUE; + return ret; + } + + if (ce->guc_num_rq_submit_no_id - 1 > 0) + atomic_add(ce->guc_num_rq_submit_no_id - 1, + &ce->guc_id_ref); + + if (context_needs_lrc_desc_pin(ce, !!ret)) + set_context_needs_register(ce); + + if (ret == NEW_GUC_ID_ENABLED) { + guc->stalled_rq = rq; + guc->submission_stall_reason = STALL_SCHED_DISABLE; + } + + clear_bit(CONTEXT_LRCA_DIRTY, &ce->flags); +out: + guc->total_num_rq_with_no_guc_id -= ce->guc_num_rq_submit_no_id; + GEM_BUG_ON(guc->total_num_rq_with_no_guc_id < 0); + + list_for_each_entry_reverse(rq, &ce->guc_active.requests, sched.link) + if (request_has_no_guc_id(rq)) { + --ce->guc_num_rq_submit_no_id; + clear_bit(I915_FENCE_FLAG_GUC_ID_NOT_PINNED, + &rq->fence.flags); + } else if (!ce->guc_num_rq_submit_no_id) { + break; + } + + GEM_BUG_ON(ce->guc_num_rq_submit_no_id); + + /* + * When NEW_GUC_ID_ENABLED is returned it means we are stealing a guc_id + * from a context that has scheduling enabled. We have to disable + * scheduling before deregistering the context and it isn't safe to do + * in the tasklet because of lock inversion (ce->guc_state.lock must be + * acquired before guc->sched_engine->lock). To work around this + * we do the schedule disable in retire workqueue and block the tasklet + * until the schedule done G2H returns. Returning non-zero here kicks + * the workqueue. + */ + return (ret == NEW_GUC_ID_ENABLED) ? ret : 0; +} + static int guc_request_alloc(struct i915_request *rq) { struct intel_context *ce = rq->context; @@ -2001,6 +2592,15 @@ static int guc_request_alloc(struct i915_request *rq) GEM_BUG_ON(!intel_context_is_pinned(rq->context)); + /* + * guc_ids are exhausted, don't allocate one here, defer to submission + * in the tasklet. + */ + if (test_and_update_guc_ids_exhausted(guc)) { + set_bit(I915_FENCE_FLAG_GUC_ID_NOT_PINNED, &rq->fence.flags); + goto out; + } + /* * Flush enough space to reduce the likelihood of waiting after * we start building the request - in which case we will just @@ -2030,9 +2630,7 @@ static int guc_request_alloc(struct i915_request *rq) * when guc_ids are being stolen due to over subscription. By the time * this function is reached, it is guaranteed that the guc_id will be * persistent until the generated request is retired. Thus, sealing these - * race conditions. It is still safe to fail here if guc_ids are - * exhausted and return -EAGAIN to the user indicating that they can try - * again in the future. + * race conditions. * * There is no need for a lock here as the timeline mutex ensures at * most one context can be executing this code path at once. The @@ -2043,10 +2641,26 @@ static int guc_request_alloc(struct i915_request *rq) if (atomic_add_unless(&ce->guc_id_ref, 1, 0)) goto out; - ret = pin_guc_id(guc, ce); /* returns 1 if new guc_id assigned */ - if (unlikely(ret < 0)) + ret = pin_guc_id(guc, ce, false); /* > 0 indicates new guc_id */ + if (unlikely(ret == -EAGAIN)) { + /* + * No guc_ids available, so we force this submission and all + * future submissions to be serialized in the tasklet, sharing + * the guc_ids on a per submission basis to ensure (more) fair + * scheduling of submissions. Once the tasklet is flushed of + * submissions we return to allocating guc_ids in this function. 
+ */ + set_bit(I915_FENCE_FLAG_GUC_ID_NOT_PINNED, &rq->fence.flags); + set_and_update_guc_ids_exhausted(guc); + + return 0; + } else if (unlikely(ret < 0)) { return ret; - if (context_needs_register(ce, !!ret)) { + } + + GEM_BUG_ON(ret == NEW_GUC_ID_ENABLED); + + if (context_needs_lrc_desc_pin(ce, !!ret)) { ret = guc_lrc_desc_pin(ce, true); if (unlikely(ret)) { /* unwind */ if (ret == -EPIPE) { @@ -2054,7 +2668,7 @@ static int guc_request_alloc(struct i915_request *rq) goto out; /* GPU will be reset */ } atomic_dec(&ce->guc_id_ref); - unpin_guc_id(guc, ce); + unpin_guc_id(guc, ce, true); return ret; } } @@ -2325,7 +2939,7 @@ static inline void guc_kernel_context_pin(struct intel_guc *guc, struct intel_context *ce) { if (context_guc_id_invalid(ce)) - pin_guc_id(guc, ce); + pin_guc_id(guc, ce, false); guc_lrc_desc_pin(ce, true); } @@ -2591,6 +3205,16 @@ int intel_guc_deregister_done_process_msg(struct intel_guc *guc, with_intel_runtime_pm(runtime_pm, wakeref) register_context(ce, true); guc_signal_context_fence(ce); + if (context_block_tasklet(ce)) { + GEM_BUG_ON(guc->submission_stall_reason != + STALL_DEREGISTER_CONTEXT); + + clr_context_block_tasklet(ce); + guc->submission_stall_reason = STALL_MOVE_LRC_TAIL; + clr_tasklet_blocked(guc); + + kick_tasklet(ce_to_guc(ce)); + } intel_context_put(ce); } else if (context_destroyed(ce)) { /* Context has been destroyed */ @@ -2654,6 +3278,14 @@ int intel_guc_sched_done_process_msg(struct intel_guc *guc, guc_blocked_fence_complete(ce); spin_unlock_irqrestore(&ce->guc_state.lock, flags); + if (context_block_tasklet(ce)) { + clr_context_block_tasklet(ce); + guc->submission_stall_reason = STALL_REGISTER_CONTEXT; + clr_tasklet_blocked(guc); + + kick_tasklet(ce_to_guc(ce)); + } + if (banned) { guc_cancel_context_requests(ce); intel_engine_signal_breadcrumbs(ce->engine); @@ -2682,10 +3314,8 @@ static void capture_error_state(struct intel_guc *guc, static void guc_context_replay(struct intel_context *ce) { - struct i915_sched_engine *sched_engine = ce->engine->sched_engine; - __guc_reset_context(ce, true); - tasklet_hi_schedule(&sched_engine->tasklet); + kick_tasklet(ce_to_guc(ce)); } static void guc_handle_context_reset(struct intel_guc *guc, @@ -2844,8 +3474,16 @@ void intel_guc_submission_print_info(struct intel_guc *guc, atomic_read(&guc->outstanding_submission_g2h)); drm_printf(p, "GuC Number GuC IDs: %u\n", guc->num_guc_ids); drm_printf(p, "GuC Max GuC IDs: %u\n", guc->max_guc_ids); - drm_printf(p, "GuC tasklet count: %u\n\n", + drm_printf(p, "GuC tasklet count: %u\n", atomic_read(&sched_engine->tasklet.count)); + drm_printf(p, "GuC submit flags: 0x%04lx\n", guc->flags); + drm_printf(p, "GuC total number request without guc_id: %d\n", + guc->total_num_rq_with_no_guc_id); + drm_printf(p, "GuC stall reason: %d\n", guc->submission_stall_reason); + drm_printf(p, "GuC stalled request: %s\n", + yesno(guc->stalled_rq)); + drm_printf(p, "GuC stalled context: %s\n\n", + yesno(guc->stalled_context)); spin_lock_irqsave(&sched_engine->lock, flags); drm_printf(p, "Requests in GuC submit tasklet:\n"); diff --git a/drivers/gpu/drm/i915/i915_request.h b/drivers/gpu/drm/i915/i915_request.h index f0463d19c712..5f304fd02071 100644 --- a/drivers/gpu/drm/i915/i915_request.h +++ b/drivers/gpu/drm/i915/i915_request.h @@ -139,6 +139,12 @@ enum { * the GPU. Here we track such boost requests on a per-request basis. */ I915_FENCE_FLAG_BOOST, + + /* + * I915_FENCE_FLAG_GUC_ID_NOT_PINNED - Set to signal the GuC submission + * tasklet that the guc_id isn't pinned. 
+ */ + I915_FENCE_FLAG_GUC_ID_NOT_PINNED, }; /** From patchwork Tue Jul 20 20:57:25 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389329 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id BF17EC636C8 for ; Tue, 20 Jul 2021 20:40:42 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 917C260FF3 for ; Tue, 20 Jul 2021 20:40:42 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 917C260FF3 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 4EC526E514; Tue, 20 Jul 2021 20:40:21 +0000 (UTC) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by gabe.freedesktop.org (Postfix) with ESMTPS id C92236E49F; Tue, 20 Jul 2021 20:40:16 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="296885364" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="296885364" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906060" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:15 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 05/42] drm/i915/guc: Don't allow requests not ready to consume all guc_ids Date: Tue, 20 Jul 2021 13:57:25 -0700 Message-Id: <20210720205802.39610-6-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Add a heuristic which checks if over half of the available guc_ids are currently consumed by requests not ready to be submitted. If this heuristic is true at request creation time (normal guc_id allocation location) force all submissions + guc_ids allocations to tasklet. 
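The heuristic itself reduces to a single comparison, mirrored by the TOO_MANY_GUC_IDS_NOT_READY() macro in the diff below. A minimal stand-alone C sketch of the check, with made-up numbers, purely for illustration:

#include <stdbool.h>
#include <stdio.h>

/* Same shape as TOO_MANY_GUC_IDS_NOT_READY(): a simple over-50% check. */
static bool too_many_guc_ids_not_ready(unsigned int available,
				       unsigned int not_ready)
{
	return not_ready > available / 2;
}

int main(void)
{
	/* e.g. 1024 guc_ids total, 600 held by requests still waiting on fences */
	printf("defer guc_id allocation to tasklet: %s\n",
	       too_many_guc_ids_not_ready(1024, 600) ? "yes" : "no");
	return 0;
}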
Signed-off-by: Matthew Brost --- drivers/gpu/drm/i915/gt/intel_context_types.h | 3 ++ drivers/gpu/drm/i915/gt/intel_reset.c | 9 ++++ drivers/gpu/drm/i915/gt/uc/intel_guc.h | 1 + .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 53 +++++++++++++++++-- .../gpu/drm/i915/gt/uc/intel_guc_submission.h | 2 + 5 files changed, 65 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h index 6008ac5aa0cf..7536129c9a5a 100644 --- a/drivers/gpu/drm/i915/gt/intel_context_types.h +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h @@ -188,6 +188,9 @@ struct intel_context { /* Number of rq submitted without a guc_id */ u16 guc_num_rq_submit_no_id; + /* GuC number of requests not ready */ + atomic_t guc_num_rq_not_ready; + /* * GuC ID link - in list when unpinned but guc_id still valid in GuC */ diff --git a/drivers/gpu/drm/i915/gt/intel_reset.c b/drivers/gpu/drm/i915/gt/intel_reset.c index 3ed694cab5af..880ca753fdd8 100644 --- a/drivers/gpu/drm/i915/gt/intel_reset.c +++ b/drivers/gpu/drm/i915/gt/intel_reset.c @@ -22,6 +22,7 @@ #include "intel_reset.h" #include "uc/intel_guc.h" +#include "uc/intel_guc_submission.h" #define RESET_MAX_RETRIES 3 @@ -844,6 +845,14 @@ static void nop_submit_request(struct i915_request *request) { RQ_TRACE(request, "-EIO\n"); + /* + * XXX: Kinda ugly to check for GuC submission here but this function is + * going away once we switch to the DRM scheduler so we can live with + * this for now. + */ + if (intel_engine_uses_guc(request->engine)) + intel_guc_decr_num_rq_not_ready(request->context); + request = i915_request_mark_eio(request); if (request) { i915_request_submit(request); diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc.h index cad4db2fd686..74481c7cf349 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h @@ -76,6 +76,7 @@ struct intel_guc { struct ida guc_ids; u32 num_guc_ids; u32 max_guc_ids; + atomic_t num_guc_ids_not_ready; struct list_head guc_id_list_no_ref; struct list_head guc_id_list_unpinned; diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index ec9b0e7eacc7..6b1e553a0e99 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -1375,6 +1375,41 @@ static inline void queue_request(struct i915_sched_engine *sched_engine, kick_tasklet(&rq->engine->gt->uc.guc); } +/* Macro to tweak heuristic, using a simple over 50% not ready for now */ +#define TOO_MANY_GUC_IDS_NOT_READY(avail, consumed) \ + (consumed > avail / 2) +static bool too_many_guc_ids_not_ready(struct intel_guc *guc, + struct intel_context *ce) +{ + u32 available_guc_ids, guc_ids_consumed; + + available_guc_ids = guc->num_guc_ids; + guc_ids_consumed = atomic_read(&guc->num_guc_ids_not_ready); + + if (TOO_MANY_GUC_IDS_NOT_READY(available_guc_ids, guc_ids_consumed)) { + set_and_update_guc_ids_exhausted(guc); + return true; + } + + return false; +} + +static void incr_num_rq_not_ready(struct intel_context *ce) +{ + struct intel_guc *guc = ce_to_guc(ce); + + if (!atomic_fetch_add(1, &ce->guc_num_rq_not_ready)) + atomic_inc(&guc->num_guc_ids_not_ready); +} + +void intel_guc_decr_num_rq_not_ready(struct intel_context *ce) +{ + struct intel_guc *guc = ce_to_guc(ce); + + if (atomic_fetch_add(-1, &ce->guc_num_rq_not_ready) == 1) + atomic_dec(&guc->num_guc_ids_not_ready); +} + static bool need_tasklet(struct intel_guc 
*guc, struct intel_context *ce) { struct i915_sched_engine * const sched_engine = @@ -1421,6 +1456,8 @@ static void guc_submit_request(struct i915_request *rq) kick_tasklet(guc); spin_unlock_irqrestore(&sched_engine->lock, flags); + + intel_guc_decr_num_rq_not_ready(rq->context); } static int new_guc_id(struct intel_guc *guc) @@ -2593,10 +2630,13 @@ static int guc_request_alloc(struct i915_request *rq) GEM_BUG_ON(!intel_context_is_pinned(rq->context)); /* - * guc_ids are exhausted, don't allocate one here, defer to submission - * in the tasklet. + * guc_ids are exhausted or a heuristic is met indicating too many + * guc_ids are waiting on requests with submission dependencies (not + * ready to submit). Don't allocate one here, defer to submission in the + * tasklet. */ - if (test_and_update_guc_ids_exhausted(guc)) { + if (test_and_update_guc_ids_exhausted(guc) || + too_many_guc_ids_not_ready(guc, ce)) { set_bit(I915_FENCE_FLAG_GUC_ID_NOT_PINNED, &rq->fence.flags); goto out; } @@ -2652,6 +2692,7 @@ static int guc_request_alloc(struct i915_request *rq) */ set_bit(I915_FENCE_FLAG_GUC_ID_NOT_PINNED, &rq->fence.flags); set_and_update_guc_ids_exhausted(guc); + incr_num_rq_not_ready(ce); return 0; } else if (unlikely(ret < 0)) { @@ -2676,6 +2717,8 @@ static int guc_request_alloc(struct i915_request *rq) clear_bit(CONTEXT_LRCA_DIRTY, &ce->flags); out: + incr_num_rq_not_ready(ce); + /* * We block all requests on this context if a G2H is pending for a * schedule disable or context deregistration as the GuC will fail a @@ -3479,6 +3522,8 @@ void intel_guc_submission_print_info(struct intel_guc *guc, drm_printf(p, "GuC submit flags: 0x%04lx\n", guc->flags); drm_printf(p, "GuC total number request without guc_id: %d\n", guc->total_num_rq_with_no_guc_id); + drm_printf(p, "GuC Number GuC IDs not ready: %d\n", + atomic_read(&guc->num_guc_ids_not_ready)); drm_printf(p, "GuC stall reason: %d\n", guc->submission_stall_reason); drm_printf(p, "GuC stalled request: %s\n", yesno(guc->stalled_rq)); @@ -3534,6 +3579,8 @@ void intel_guc_submission_print_context_info(struct intel_guc *guc, atomic_read(&ce->pin_count)); drm_printf(p, "\t\tGuC ID Ref Count: %u\n", atomic_read(&ce->guc_id_ref)); + drm_printf(p, "\t\tNumber Requests Not Ready: %u\n", + atomic_read(&ce->guc_num_rq_not_ready)); drm_printf(p, "\t\tSchedule State: 0x%x, 0x%x\n\n", ce->guc_state.sched_state, atomic_read(&ce->guc_sched_state_no_lock)); diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h index c7ef44fa0c36..17af5e123b09 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h @@ -51,4 +51,6 @@ static inline bool intel_guc_submission_is_used(struct intel_guc *guc) return intel_guc_is_used(guc) && intel_guc_submission_is_wanted(guc); } +void intel_guc_decr_num_rq_not_ready(struct intel_context *ce); + #endif From patchwork Tue Jul 20 20:57:26 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389327 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org 
[198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id ED871C07E9B for ; Tue, 20 Jul 2021 20:40:40 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id C195D60FF3 for ; Tue, 20 Jul 2021 20:40:40 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org C195D60FF3 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 523AF6E54C; Tue, 20 Jul 2021 20:40:22 +0000 (UTC) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by gabe.freedesktop.org (Postfix) with ESMTPS id F05FF6E4F1; Tue, 20 Jul 2021 20:40:16 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="296885365" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="296885365" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906061" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:15 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 06/42] drm/i915/guc: Introduce guc_submit_engine object Date: Tue, 20 Jul 2021 13:57:26 -0700 Message-Id: <20210720205802.39610-7-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel"

Move fields related to controlling the GuC submission state machine into a dedicated object (guc_submit_engine) rather than the global GuC state (intel_guc). This encapsulation allows multiple submission objects to operate in parallel: one instance can block if needed while another makes forward progress. This is analogous to how execlist mode assigns a schedule object per physical engine, except that in GuC mode we assign a schedule object based on the blocking dependencies. The guc_submit_engine object also encapsulates the i915_sched_engine object.

Lots of find-replace.

Currently only one guc_submit_engine is instantiated; future patches will instantiate more.
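The core of the encapsulation is embedding the i915_sched_engine inside guc_submit_engine and recovering the container with container_of(), as ce_to_gse() and gse_submission_tasklet() do in the diff below. A minimal stand-alone sketch of that pattern in user-space C, with simplified stand-in types rather than the real driver structures:

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct sched_engine {			/* stand-in for struct i915_sched_engine */
	int queue_depth;
};

struct guc_submit_engine {		/* stand-in for the new submission object */
	struct sched_engine sched_engine;	/* embedded, not a pointer */
	unsigned long flags;
	int total_num_rq_with_no_guc_id;
};

/* Given only the embedded scheduler, recover the submission state around it. */
static struct guc_submit_engine *to_gse(struct sched_engine *se)
{
	return container_of(se, struct guc_submit_engine, sched_engine);
}

int main(void)
{
	struct guc_submit_engine gse = { .total_num_rq_with_no_guc_id = 3 };
	struct sched_engine *se = &gse.sched_engine;

	printf("requests without a guc_id: %d\n",
	       to_gse(se)->total_num_rq_with_no_guc_id);
	return 0;
}

Because the submission state travels with the scheduler it belongs to, one guc_submit_engine can stall on a resource while another keeps submitting, which is the point of the refactor.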
Signed-off-by: Matthew Brost Cc: John Harrison --- drivers/gpu/drm/i915/gt/uc/intel_guc.h | 33 +- .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 555 +++++++++++------- .../i915/gt/uc/intel_guc_submission_types.h | 52 ++ drivers/gpu/drm/i915/i915_scheduler.c | 22 +- drivers/gpu/drm/i915/i915_scheduler.h | 3 + 5 files changed, 409 insertions(+), 256 deletions(-) create mode 100644 drivers/gpu/drm/i915/gt/uc/intel_guc_submission_types.h diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc.h index 74481c7cf349..e278ad376986 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h @@ -21,6 +21,11 @@ struct __guc_ads_blob; +enum { + GUC_SUBMIT_ENGINE_SINGLE_LRC, + GUC_SUBMIT_ENGINE_MAX +}; + /* * Top level structure of GuC. It handles firmware loading and manages client * pool. intel_guc owns a intel_guc_client to replace the legacy ExecList @@ -31,31 +36,6 @@ struct intel_guc { struct intel_guc_log log; struct intel_guc_ct ct; - /* Global engine used to submit requests to GuC */ - struct i915_sched_engine *sched_engine; - - /* Global state related to submission tasklet */ - struct i915_request *stalled_rq; - struct intel_context *stalled_context; - struct work_struct retire_worker; - unsigned long flags; - int total_num_rq_with_no_guc_id; - - /* - * Submisson stall reason. See intel_guc_submission.c for detailed - * description. - */ - enum { - STALL_NONE, - STALL_GUC_ID_WORKQUEUE, - STALL_GUC_ID_TASKLET, - STALL_SCHED_DISABLE, - STALL_REGISTER_CONTEXT, - STALL_DEREGISTER_CONTEXT, - STALL_MOVE_LRC_TAIL, - STALL_ADD_REQUEST, - } submission_stall_reason; - /* intel_guc_recv interrupt related state */ spinlock_t irq_lock; unsigned int msg_enabled_mask; @@ -68,6 +48,8 @@ struct intel_guc { void (*disable)(struct intel_guc *guc); } interrupts; + struct guc_submit_engine *gse[GUC_SUBMIT_ENGINE_MAX]; + /* * contexts_lock protects the pool of free guc ids and a linked list of * guc ids available to be stolen @@ -76,7 +58,6 @@ struct intel_guc { struct ida guc_ids; u32 num_guc_ids; u32 max_guc_ids; - atomic_t num_guc_ids_not_ready; struct list_head guc_id_list_no_ref; struct list_head guc_id_list_unpinned; diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index 6b1e553a0e99..fce5c1d8cfda 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -21,6 +21,7 @@ #include "gt/intel_ring.h" #include "intel_guc_submission.h" +#include "intel_guc_submission_types.h" #include "i915_drv.h" #include "i915_trace.h" @@ -57,7 +58,7 @@ * WQ_TYPE_INORDER is needed to support legacy submission via GuC, which * represents in-order queue. The kernel driver packs ring tail pointer and an * ELSP context descriptor dword into Work Item. 
- * See guc_add_request() + * See gse_add_request() * * GuC flow control state machine: * The tasklet, workqueue (retire_worker), and the G2H handlers together more or @@ -80,57 +81,57 @@ * context) */ -/* GuC Virtual Engine */ -struct guc_virtual_engine { - struct intel_engine_cs base; - struct intel_context context; -}; - static struct intel_context * guc_create_virtual(struct intel_engine_cs **siblings, unsigned int count); #define GUC_REQUEST_SIZE 64 /* bytes */ +static inline struct guc_submit_engine *ce_to_gse(struct intel_context *ce) +{ + return container_of(ce->engine->sched_engine, struct guc_submit_engine, + sched_engine); +} + /* * Global GuC flags helper functions */ enum { - GUC_STATE_TASKLET_BLOCKED, - GUC_STATE_GUC_IDS_EXHAUSTED, + GSE_STATE_TASKLET_BLOCKED, + GSE_STATE_GUC_IDS_EXHAUSTED, }; -static bool tasklet_blocked(struct intel_guc *guc) +static bool tasklet_blocked(struct guc_submit_engine *gse) { - return test_bit(GUC_STATE_TASKLET_BLOCKED, &guc->flags); + return test_bit(GSE_STATE_TASKLET_BLOCKED, &gse->flags); } -static void set_tasklet_blocked(struct intel_guc *guc) +static void set_tasklet_blocked(struct guc_submit_engine *gse) { - lockdep_assert_held(&guc->sched_engine->lock); - set_bit(GUC_STATE_TASKLET_BLOCKED, &guc->flags); + lockdep_assert_held(&gse->sched_engine.lock); + set_bit(GSE_STATE_TASKLET_BLOCKED, &gse->flags); } -static void __clr_tasklet_blocked(struct intel_guc *guc) +static void __clr_tasklet_blocked(struct guc_submit_engine *gse) { - lockdep_assert_held(&guc->sched_engine->lock); - clear_bit(GUC_STATE_TASKLET_BLOCKED, &guc->flags); + lockdep_assert_held(&gse->sched_engine.lock); + clear_bit(GSE_STATE_TASKLET_BLOCKED, &gse->flags); } -static void clr_tasklet_blocked(struct intel_guc *guc) +static void clr_tasklet_blocked(struct guc_submit_engine *gse) { unsigned long flags; - spin_lock_irqsave(&guc->sched_engine->lock, flags); - __clr_tasklet_blocked(guc); - spin_unlock_irqrestore(&guc->sched_engine->lock, flags); + spin_lock_irqsave(&gse->sched_engine.lock, flags); + __clr_tasklet_blocked(gse); + spin_unlock_irqrestore(&gse->sched_engine.lock, flags); } -static bool guc_ids_exhausted(struct intel_guc *guc) +static bool guc_ids_exhausted(struct guc_submit_engine *gse) { - return test_bit(GUC_STATE_GUC_IDS_EXHAUSTED, &guc->flags); + return test_bit(GSE_STATE_GUC_IDS_EXHAUSTED, &gse->flags); } -static bool test_and_update_guc_ids_exhausted(struct intel_guc *guc) +static bool test_and_update_guc_ids_exhausted(struct guc_submit_engine *gse) { unsigned long flags; bool ret = false; @@ -139,33 +140,33 @@ static bool test_and_update_guc_ids_exhausted(struct intel_guc *guc) * Strict ordering on checking if guc_ids are exhausted isn't required, * so let's avoid grabbing the submission lock if possible. 
*/ - if (guc_ids_exhausted(guc)) { - spin_lock_irqsave(&guc->sched_engine->lock, flags); - ret = guc_ids_exhausted(guc); + if (guc_ids_exhausted(gse)) { + spin_lock_irqsave(&gse->sched_engine.lock, flags); + ret = guc_ids_exhausted(gse); if (ret) - ++guc->total_num_rq_with_no_guc_id; - spin_unlock_irqrestore(&guc->sched_engine->lock, flags); + ++gse->total_num_rq_with_no_guc_id; + spin_unlock_irqrestore(&gse->sched_engine.lock, flags); } return ret; } -static void set_and_update_guc_ids_exhausted(struct intel_guc *guc) +static void set_and_update_guc_ids_exhausted(struct guc_submit_engine *gse) { unsigned long flags; - spin_lock_irqsave(&guc->sched_engine->lock, flags); - ++guc->total_num_rq_with_no_guc_id; - set_bit(GUC_STATE_GUC_IDS_EXHAUSTED, &guc->flags); - spin_unlock_irqrestore(&guc->sched_engine->lock, flags); + spin_lock_irqsave(&gse->sched_engine.lock, flags); + ++gse->total_num_rq_with_no_guc_id; + set_bit(GSE_STATE_GUC_IDS_EXHAUSTED, &gse->flags); + spin_unlock_irqrestore(&gse->sched_engine.lock, flags); } -static void clr_guc_ids_exhausted(struct intel_guc *guc) +static void clr_guc_ids_exhausted(struct guc_submit_engine *gse) { - lockdep_assert_held(&guc->sched_engine->lock); - GEM_BUG_ON(guc->total_num_rq_with_no_guc_id); + lockdep_assert_held(&gse->sched_engine.lock); + GEM_BUG_ON(gse->total_num_rq_with_no_guc_id); - clear_bit(GUC_STATE_GUC_IDS_EXHAUSTED, &guc->flags); + clear_bit(GSE_STATE_GUC_IDS_EXHAUSTED, &gse->flags); } /* @@ -416,6 +417,20 @@ static inline struct intel_guc *ce_to_guc(struct intel_context *ce) return &ce->engine->gt->uc.guc; } +static inline struct i915_sched_engine * +ce_to_sched_engine(struct intel_context *ce) +{ + return ce->engine->sched_engine; +} + +static inline struct i915_sched_engine * +guc_to_sched_engine(struct intel_guc *guc, int index) +{ + GEM_BUG_ON(index < 0 || index >= GUC_SUBMIT_ENGINE_MAX); + + return &guc->gse[index]->sched_engine; +} + static inline struct i915_priolist *to_priolist(struct rb_node *rb) { return rb_entry(rb, struct i915_priolist, node); @@ -637,19 +652,20 @@ static int __guc_add_request(struct intel_guc *guc, struct i915_request *rq) return err; } -static int guc_add_request(struct intel_guc *guc, struct i915_request *rq) +static int gse_add_request(struct guc_submit_engine *gse, + struct i915_request *rq) { int ret; - lockdep_assert_held(&guc->sched_engine->lock); + lockdep_assert_held(&gse->sched_engine.lock); - ret = __guc_add_request(guc, rq); + ret = __guc_add_request(gse->sched_engine.private_data, rq); if (ret == -EBUSY) { - guc->stalled_rq = rq; - guc->submission_stall_reason = STALL_ADD_REQUEST; + gse->stalled_rq = rq; + gse->submission_stall_reason = STALL_ADD_REQUEST; } else { - guc->stalled_rq = NULL; - guc->submission_stall_reason = STALL_NONE; + gse->stalled_rq = NULL; + gse->submission_stall_reason = STALL_NONE; } return ret; @@ -657,14 +673,15 @@ static int guc_add_request(struct intel_guc *guc, struct i915_request *rq) static int guc_lrc_desc_pin(struct intel_context *ce, bool loop); -static int tasklet_register_context(struct intel_guc *guc, +static int tasklet_register_context(struct guc_submit_engine *gse, struct i915_request *rq) { struct intel_context *ce = rq->context; + struct intel_guc *guc = gse->sched_engine.private_data; int ret = 0; /* Check state */ - lockdep_assert_held(&guc->sched_engine->lock); + lockdep_assert_held(&gse->sched_engine.lock); GEM_BUG_ON(ce->guc_num_rq_submit_no_id); GEM_BUG_ON(request_has_no_guc_id(rq)); GEM_BUG_ON(context_guc_id_invalid(ce)); @@ -687,11 +704,11 @@ 
static int tasklet_register_context(struct intel_guc *guc, clr_context_needs_register(ce); if (unlikely(ret == -EBUSY)) { - guc->stalled_rq = rq; - guc->submission_stall_reason = STALL_REGISTER_CONTEXT; + gse->stalled_rq = rq; + gse->submission_stall_reason = STALL_REGISTER_CONTEXT; } else if (unlikely(ret == -EINPROGRESS)) { - guc->stalled_rq = rq; - guc->submission_stall_reason = STALL_DEREGISTER_CONTEXT; + gse->stalled_rq = rq; + gse->submission_stall_reason = STALL_DEREGISTER_CONTEXT; } } @@ -710,28 +727,29 @@ static inline int rq_prio(const struct i915_request *rq) return rq->sched.attr.priority; } -static void kick_retire_wq(struct intel_guc *guc) +static void kick_retire_wq(struct guc_submit_engine *gse) { - queue_work(system_unbound_wq, &guc->retire_worker); + queue_work(system_unbound_wq, &gse->retire_worker); } -static int tasklet_pin_guc_id(struct intel_guc *guc, struct i915_request *rq); +static int tasklet_pin_guc_id(struct guc_submit_engine *gse, + struct i915_request *rq); -static int guc_dequeue_one_context(struct intel_guc *guc) +static int gse_dequeue_one_context(struct guc_submit_engine *gse) { - struct i915_sched_engine * const sched_engine = guc->sched_engine; - struct i915_request *last = guc->stalled_rq; + struct i915_sched_engine * const sched_engine = &gse->sched_engine; + struct i915_request *last = gse->stalled_rq; bool submit = !!last; struct rb_node *rb; int ret; lockdep_assert_held(&sched_engine->lock); - GEM_BUG_ON(guc->stalled_context); - GEM_BUG_ON(!submit && guc->submission_stall_reason); + GEM_BUG_ON(gse->stalled_context); + GEM_BUG_ON(!submit && gse->submission_stall_reason); if (submit) { /* Flow control conditions */ - switch (guc->submission_stall_reason) { + switch (gse->submission_stall_reason) { case STALL_GUC_ID_TASKLET: goto done; case STALL_REGISTER_CONTEXT: @@ -744,8 +762,8 @@ static int guc_dequeue_one_context(struct intel_guc *guc) GEM_BUG_ON("Invalid stall state"); } } else { - GEM_BUG_ON(!guc->total_num_rq_with_no_guc_id && - guc_ids_exhausted(guc)); + GEM_BUG_ON(!gse->total_num_rq_with_no_guc_id && + guc_ids_exhausted(gse)); while ((rb = rb_first_cached(&sched_engine->queue))) { struct i915_priolist *p = to_priolist(rb); @@ -774,13 +792,13 @@ static int guc_dequeue_one_context(struct intel_guc *guc) struct intel_context *ce = last->context; if (ce->guc_num_rq_submit_no_id) { - ret = tasklet_pin_guc_id(guc, last); + ret = tasklet_pin_guc_id(gse, last); if (ret) goto blk_tasklet_kick; } register_context: - ret = tasklet_register_context(guc, last); + ret = tasklet_register_context(gse, last); if (unlikely(ret == -EINPROGRESS)) goto blk_tasklet; else if (unlikely(ret == -EPIPE)) @@ -796,7 +814,7 @@ static int guc_dequeue_one_context(struct intel_guc *guc) guc_set_lrc_tail(last); add_request: - ret = guc_add_request(guc, last); + ret = gse_add_request(gse, last); if (unlikely(ret == -EPIPE)) goto deadlk; else if (ret == -EBUSY) @@ -811,8 +829,8 @@ static int guc_dequeue_one_context(struct intel_guc *guc) * No requests without a guc_id, enable guc_id allocation at request * creation time (guc_request_alloc). 
*/ - if (!guc->total_num_rq_with_no_guc_id) - clr_guc_ids_exhausted(guc); + if (!gse->total_num_rq_with_no_guc_id) + clr_guc_ids_exhausted(gse); return submit; @@ -826,25 +844,26 @@ static int guc_dequeue_one_context(struct intel_guc *guc) return false; blk_tasklet_kick: - kick_retire_wq(guc); + kick_retire_wq(gse); blk_tasklet: - set_tasklet_blocked(guc); + set_tasklet_blocked(gse); return false; } -static void guc_submission_tasklet(struct tasklet_struct *t) +static void gse_submission_tasklet(struct tasklet_struct *t) { struct i915_sched_engine *sched_engine = from_tasklet(sched_engine, t, tasklet); - struct intel_guc *guc = sched_engine->private_data; + struct guc_submit_engine *gse = + container_of(sched_engine, typeof(*gse), sched_engine); unsigned long flags; bool loop; spin_lock_irqsave(&sched_engine->lock, flags); - if (likely(!tasklet_blocked(guc))) + if (likely(!tasklet_blocked(gse))) do { - loop = guc_dequeue_one_context(guc); + loop = gse_dequeue_one_context(gse); } while (loop); i915_sched_engine_reset_on_empty(sched_engine); @@ -919,69 +938,99 @@ static void scrub_guc_desc_for_outstanding_g2h(struct intel_guc *guc) } } -static inline bool -submission_disabled(struct intel_guc *guc) +static bool submission_disabled(struct intel_guc *guc) { - struct i915_sched_engine * const sched_engine = guc->sched_engine; + int i; + + for (i = 0; i < GUC_SUBMIT_ENGINE_MAX; ++i) { + struct i915_sched_engine *sched_engine; + + if (unlikely(!guc->gse[i])) + return true; + + sched_engine = guc_to_sched_engine(guc, i); + + if (unlikely(!__tasklet_is_enabled(&sched_engine->tasklet))) + return true; + } - return unlikely(!sched_engine || - !__tasklet_is_enabled(&sched_engine->tasklet)); + return false; } -static void kick_tasklet(struct intel_guc *guc) +static void kick_tasklet(struct guc_submit_engine *gse) { - struct i915_sched_engine * const sched_engine = guc->sched_engine; + struct i915_sched_engine *sched_engine = &gse->sched_engine; - if (likely(!tasklet_blocked(guc))) + if (likely(!tasklet_blocked(gse))) tasklet_hi_schedule(&sched_engine->tasklet); } static void disable_submission(struct intel_guc *guc) { - struct i915_sched_engine * const sched_engine = guc->sched_engine; + int i; + + for (i = 0; i < GUC_SUBMIT_ENGINE_MAX; ++i) { + struct i915_sched_engine *sched_engine = + guc_to_sched_engine(guc, i); - if (__tasklet_is_enabled(&sched_engine->tasklet)) { - GEM_BUG_ON(!guc->ct.enabled); - __tasklet_disable_sync_once(&sched_engine->tasklet); - sched_engine->tasklet.callback = NULL; + if (__tasklet_is_enabled(&sched_engine->tasklet)) { + GEM_BUG_ON(!guc->ct.enabled); + __tasklet_disable_sync_once(&sched_engine->tasklet); + sched_engine->tasklet.callback = NULL; + } } } static void enable_submission(struct intel_guc *guc) { - struct i915_sched_engine * const sched_engine = guc->sched_engine; unsigned long flags; + int i; - spin_lock_irqsave(&guc->sched_engine->lock, flags); - sched_engine->tasklet.callback = guc_submission_tasklet; - wmb(); - if (!__tasklet_is_enabled(&sched_engine->tasklet) && - __tasklet_enable(&sched_engine->tasklet)) { - GEM_BUG_ON(!guc->ct.enabled); - - /* Reset tasklet state */ - guc->stalled_rq = NULL; - if (guc->stalled_context) - intel_context_put(guc->stalled_context); - guc->stalled_context = NULL; - guc->submission_stall_reason = STALL_NONE; - guc->flags = 0; + for (i = 0; i < GUC_SUBMIT_ENGINE_MAX; ++i) { + struct i915_sched_engine *sched_engine = + guc_to_sched_engine(guc, i); + struct guc_submit_engine *gse = guc->gse[i]; - /* And kick in case we missed a new 
request submission. */ - kick_tasklet(guc); + spin_lock_irqsave(&sched_engine->lock, flags); + sched_engine->tasklet.callback = gse_submission_tasklet; + wmb(); + if (!__tasklet_is_enabled(&sched_engine->tasklet) && + __tasklet_enable(&sched_engine->tasklet)) { + GEM_BUG_ON(!guc->ct.enabled); + + /* Reset GuC submit engine state */ + gse->stalled_rq = NULL; + if (gse->stalled_context) + intel_context_put(gse->stalled_context); + gse->stalled_context = NULL; + gse->submission_stall_reason = STALL_NONE; + gse->flags = 0; + + /* And kick in case we missed a new request submission. */ + kick_tasklet(gse); + } + spin_unlock_irqrestore(&sched_engine->lock, flags); } - spin_unlock_irqrestore(&guc->sched_engine->lock, flags); } -static void guc_flush_submissions(struct intel_guc *guc) +static void gse_flush_submissions(struct guc_submit_engine *gse) { - struct i915_sched_engine * const sched_engine = guc->sched_engine; + struct i915_sched_engine * const sched_engine = &gse->sched_engine; unsigned long flags; spin_lock_irqsave(&sched_engine->lock, flags); spin_unlock_irqrestore(&sched_engine->lock, flags); } +static void guc_flush_submissions(struct intel_guc *guc) +{ + int i; + + for (i = 0; i < GUC_SUBMIT_ENGINE_MAX; ++i) + if (likely(guc->gse[i])) + gse_flush_submissions(guc->gse[i]); +} + void intel_guc_submission_reset_prepare(struct intel_guc *guc) { int i; @@ -1163,13 +1212,12 @@ void intel_guc_submission_reset(struct intel_guc *guc, bool stalled) if (intel_context_is_pinned(ce)) __guc_reset_context(ce, stalled); - /* GuC is blown away, drop all references to contexts */ xa_destroy(&guc->context_lookup); } static void guc_cancel_context_requests(struct intel_context *ce) { - struct i915_sched_engine *sched_engine = ce_to_guc(ce)->sched_engine; + struct i915_sched_engine *sched_engine = ce_to_sched_engine(ce); struct i915_request *rq; unsigned long flags; @@ -1184,8 +1232,9 @@ static void guc_cancel_context_requests(struct intel_context *ce) } static void -guc_cancel_sched_engine_requests(struct i915_sched_engine *sched_engine) +gse_cancel_requests(struct guc_submit_engine *gse) { + struct i915_sched_engine *sched_engine = &gse->sched_engine; struct i915_request *rq, *rn; struct rb_node *rb; unsigned long flags; @@ -1242,12 +1291,14 @@ void intel_guc_submission_cancel_requests(struct intel_guc *guc) { struct intel_context *ce; unsigned long index; + int i; xa_for_each(&guc->context_lookup, index, ce) if (intel_context_is_pinned(ce)) guc_cancel_context_requests(ce); - guc_cancel_sched_engine_requests(guc->sched_engine); + for (i = 0; i < GUC_SUBMIT_ENGINE_MAX; ++i) + gse_cancel_requests(guc->gse[i]); /* GuC is blown away, drop all references to contexts */ xa_destroy(&guc->context_lookup); @@ -1274,13 +1325,13 @@ void intel_guc_submission_reset_finish(struct intel_guc *guc) intel_gt_unpark_heartbeats(guc_to_gt(guc)); } -static void retire_worker_sched_disable(struct intel_guc *guc, +static void retire_worker_sched_disable(struct guc_submit_engine *gse, struct intel_context *ce); static void retire_worker_func(struct work_struct *w) { - struct intel_guc *guc = - container_of(w, struct intel_guc, retire_worker); + struct guc_submit_engine *gse = + container_of(w, struct guc_submit_engine, retire_worker); /* * It is possible that another thread issues the schedule disable + that @@ -1288,17 +1339,17 @@ static void retire_worker_func(struct work_struct *w) * where nothing needs to be done here. Let's be paranoid and kick the * tasklet in that case. 
*/ - if (guc->submission_stall_reason != STALL_SCHED_DISABLE && - guc->submission_stall_reason != STALL_GUC_ID_WORKQUEUE) { - kick_tasklet(guc); + if (gse->submission_stall_reason != STALL_SCHED_DISABLE && + gse->submission_stall_reason != STALL_GUC_ID_WORKQUEUE) { + kick_tasklet(gse); return; } - if (guc->submission_stall_reason == STALL_SCHED_DISABLE) { - GEM_BUG_ON(!guc->stalled_context); - GEM_BUG_ON(context_guc_id_invalid(guc->stalled_context)); + if (gse->submission_stall_reason == STALL_SCHED_DISABLE) { + GEM_BUG_ON(!gse->stalled_context); + GEM_BUG_ON(context_guc_id_invalid(gse->stalled_context)); - retire_worker_sched_disable(guc, guc->stalled_context); + retire_worker_sched_disable(gse, gse->stalled_context); } /* @@ -1306,16 +1357,16 @@ static void retire_worker_func(struct work_struct *w) * albeit after possibly issuing a schedule disable as that is async * operation. */ - intel_gt_retire_requests(guc_to_gt(guc)); + intel_gt_retire_requests(guc_to_gt(gse->sched_engine.private_data)); - if (guc->submission_stall_reason == STALL_GUC_ID_WORKQUEUE) { - GEM_BUG_ON(guc->stalled_context); + if (gse->submission_stall_reason == STALL_GUC_ID_WORKQUEUE) { + GEM_BUG_ON(gse->stalled_context); /* Hopefully guc_ids are now available, kick tasklet */ - guc->submission_stall_reason = STALL_GUC_ID_TASKLET; - clr_tasklet_blocked(guc); + gse->submission_stall_reason = STALL_GUC_ID_TASKLET; + clr_tasklet_blocked(gse); - kick_tasklet(guc); + kick_tasklet(gse); } } @@ -1346,18 +1397,24 @@ int intel_guc_submission_init(struct intel_guc *guc) INIT_LIST_HEAD(&guc->guc_id_list_unpinned); ida_init(&guc->guc_ids); - INIT_WORK(&guc->retire_worker, retire_worker_func); - return 0; } void intel_guc_submission_fini(struct intel_guc *guc) { + int i; + if (!guc->lrc_desc_pool) return; guc_lrc_desc_pool_destroy(guc); - i915_sched_engine_put(guc->sched_engine); + + for (i = 0; i < GUC_SUBMIT_ENGINE_MAX; ++i) { + struct i915_sched_engine *sched_engine = + guc_to_sched_engine(guc, i); + + i915_sched_engine_put(sched_engine); + } } static inline void queue_request(struct i915_sched_engine *sched_engine, @@ -1372,22 +1429,23 @@ static inline void queue_request(struct i915_sched_engine *sched_engine, set_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags); if (empty) - kick_tasklet(&rq->engine->gt->uc.guc); + kick_tasklet(ce_to_gse(rq->context)); } /* Macro to tweak heuristic, using a simple over 50% not ready for now */ #define TOO_MANY_GUC_IDS_NOT_READY(avail, consumed) \ (consumed > avail / 2) -static bool too_many_guc_ids_not_ready(struct intel_guc *guc, +static bool too_many_guc_ids_not_ready(struct guc_submit_engine *gse, struct intel_context *ce) { u32 available_guc_ids, guc_ids_consumed; + struct intel_guc *guc = gse->sched_engine.private_data; available_guc_ids = guc->num_guc_ids; - guc_ids_consumed = atomic_read(&guc->num_guc_ids_not_ready); + guc_ids_consumed = atomic_read(&gse->num_guc_ids_not_ready); if (TOO_MANY_GUC_IDS_NOT_READY(available_guc_ids, guc_ids_consumed)) { - set_and_update_guc_ids_exhausted(guc); + set_and_update_guc_ids_exhausted(gse); return true; } @@ -1396,34 +1454,36 @@ static bool too_many_guc_ids_not_ready(struct intel_guc *guc, static void incr_num_rq_not_ready(struct intel_context *ce) { - struct intel_guc *guc = ce_to_guc(ce); + struct guc_submit_engine *gse = ce_to_gse(ce); if (!atomic_fetch_add(1, &ce->guc_num_rq_not_ready)) - atomic_inc(&guc->num_guc_ids_not_ready); + atomic_inc(&gse->num_guc_ids_not_ready); } void intel_guc_decr_num_rq_not_ready(struct intel_context *ce) { - struct 
intel_guc *guc = ce_to_guc(ce); + struct guc_submit_engine *gse = ce_to_gse(ce); - if (atomic_fetch_add(-1, &ce->guc_num_rq_not_ready) == 1) - atomic_dec(&guc->num_guc_ids_not_ready); + if (atomic_fetch_add(-1, &ce->guc_num_rq_not_ready) == 1) { + GEM_BUG_ON(!atomic_read(&gse->num_guc_ids_not_ready)); + atomic_dec(&gse->num_guc_ids_not_ready); + } } -static bool need_tasklet(struct intel_guc *guc, struct intel_context *ce) +static bool need_tasklet(struct guc_submit_engine *gse, struct intel_context *ce) { - struct i915_sched_engine * const sched_engine = - ce->engine->sched_engine; + struct i915_sched_engine * const sched_engine = &gse->sched_engine; + struct intel_guc *guc = gse->sched_engine.private_data; lockdep_assert_held(&sched_engine->lock); - return guc_ids_exhausted(guc) || submission_disabled(guc) || - guc->stalled_rq || guc->stalled_context || + return guc_ids_exhausted(gse) || submission_disabled(guc) || + gse->stalled_rq || gse->stalled_context || !lrc_desc_registered(guc, ce->guc_id) || !i915_sched_engine_is_empty(sched_engine); } -static int guc_bypass_tasklet_submit(struct intel_guc *guc, +static int gse_bypass_tasklet_submit(struct guc_submit_engine *gse, struct i915_request *rq) { int ret; @@ -1433,27 +1493,27 @@ static int guc_bypass_tasklet_submit(struct intel_guc *guc, trace_i915_request_in(rq, 0); guc_set_lrc_tail(rq); - ret = guc_add_request(guc, rq); + ret = gse_add_request(gse, rq); if (unlikely(ret == -EPIPE)) - disable_submission(guc); + disable_submission(gse->sched_engine.private_data); return ret; } static void guc_submit_request(struct i915_request *rq) { - struct i915_sched_engine *sched_engine = rq->engine->sched_engine; - struct intel_guc *guc = &rq->engine->gt->uc.guc; + struct guc_submit_engine *gse = ce_to_gse(rq->context); + struct i915_sched_engine *sched_engine = &gse->sched_engine; unsigned long flags; /* Will be called from irq-context when using foreign fences. */ spin_lock_irqsave(&sched_engine->lock, flags); - if (need_tasklet(guc, rq->context)) + if (need_tasklet(gse, rq->context)) queue_request(sched_engine, rq, rq_prio(rq)); - else if (guc_bypass_tasklet_submit(guc, rq) == -EBUSY) - kick_tasklet(guc); + else if (gse_bypass_tasklet_submit(gse, rq) == -EBUSY) + kick_tasklet(gse); spin_unlock_irqrestore(&sched_engine->lock, flags); @@ -1532,8 +1592,9 @@ static int steal_guc_id(struct intel_guc *guc, bool unpinned) * context. 
*/ if (!unpinned) { - GEM_BUG_ON(guc->stalled_context); - guc->stalled_context = intel_context_get(ce); + GEM_BUG_ON(ce_to_gse(ce)->stalled_context); + + ce_to_gse(ce)->stalled_context = intel_context_get(ce); set_context_guc_id_stolen(ce); } else { set_context_guc_id_invalid(ce); @@ -1595,7 +1656,7 @@ static int pin_guc_id(struct intel_guc *guc, struct intel_context *ce, try_again: spin_lock_irqsave(&guc->contexts_lock, flags); - if (!tasklet && guc_ids_exhausted(guc)) { + if (!tasklet && guc_ids_exhausted(ce_to_gse(ce))) { ret = -EAGAIN; goto out_unlock; } @@ -2084,7 +2145,7 @@ static void guc_context_ban(struct intel_context *ce, struct i915_request *rq) intel_wakeref_t wakeref; unsigned long flags; - guc_flush_submissions(guc); + gse_flush_submissions(ce_to_gse(ce)); spin_lock_irqsave(&ce->guc_state.lock, flags); set_context_banned(ce); @@ -2172,7 +2233,7 @@ static void guc_context_sched_disable(struct intel_context *ce) spin_unlock_irqrestore(&ce->guc_state.lock, flags); with_intel_runtime_pm(runtime_pm, wakeref) - __guc_context_sched_disable(guc, ce, guc_id); + __guc_context_sched_disable(ce_to_guc(ce), ce, guc_id); return; unpin: @@ -2428,7 +2489,7 @@ static void remove_from_context(struct i915_request *rq) if (likely(!request_has_no_guc_id(rq))) atomic_dec(&ce->guc_id_ref); else - --ce_to_guc(rq->context)->total_num_rq_with_no_guc_id; + --ce_to_gse(rq->context)->total_num_rq_with_no_guc_id; unpin_guc_id(ce_to_guc(ce), ce, false); i915_request_notify_execute_cb_imm(rq); @@ -2489,13 +2550,14 @@ static void invalidate_guc_id_sched_disable(struct intel_context *ce) clr_context_guc_id_stolen(ce); } -static void retire_worker_sched_disable(struct intel_guc *guc, +static void retire_worker_sched_disable(struct guc_submit_engine *gse, struct intel_context *ce) { + struct intel_guc *guc = gse->sched_engine.private_data; unsigned long flags; bool disabled; - guc->stalled_context = NULL; + gse->stalled_context = NULL; spin_lock_irqsave(&ce->guc_state.lock, flags); disabled = submission_disabled(guc); if (!disabled && !context_pending_disable(ce) && context_enabled(ce)) { @@ -2541,10 +2603,10 @@ static void retire_worker_sched_disable(struct intel_guc *guc, invalidate_guc_id_sched_disable(ce); - guc->submission_stall_reason = STALL_REGISTER_CONTEXT; - clr_tasklet_blocked(guc); + gse->submission_stall_reason = STALL_REGISTER_CONTEXT; + clr_tasklet_blocked(gse); - kick_tasklet(ce_to_guc(ce)); + kick_tasklet(gse); } intel_context_put(ce); @@ -2557,25 +2619,26 @@ static bool context_needs_lrc_desc_pin(struct intel_context *ce, bool new_guc_id !submission_disabled(ce_to_guc(ce)); } -static int tasklet_pin_guc_id(struct intel_guc *guc, struct i915_request *rq) +static int tasklet_pin_guc_id(struct guc_submit_engine *gse, + struct i915_request *rq) { struct intel_context *ce = rq->context; int ret = 0; - lockdep_assert_held(&guc->sched_engine->lock); + lockdep_assert_held(&gse->sched_engine.lock); GEM_BUG_ON(!ce->guc_num_rq_submit_no_id); if (atomic_add_unless(&ce->guc_id_ref, ce->guc_num_rq_submit_no_id, 0)) goto out; - ret = pin_guc_id(guc, ce, true); + ret = pin_guc_id(gse->sched_engine.private_data, ce, true); if (unlikely(ret < 0)) { /* * No guc_ids available, disable the tasklet and kick the retire * workqueue hopefully freeing up some guc_ids. 
*/ - guc->stalled_rq = rq; - guc->submission_stall_reason = STALL_GUC_ID_WORKQUEUE; + gse->stalled_rq = rq; + gse->submission_stall_reason = STALL_GUC_ID_WORKQUEUE; return ret; } @@ -2587,14 +2650,14 @@ static int tasklet_pin_guc_id(struct intel_guc *guc, struct i915_request *rq) set_context_needs_register(ce); if (ret == NEW_GUC_ID_ENABLED) { - guc->stalled_rq = rq; - guc->submission_stall_reason = STALL_SCHED_DISABLE; + gse->stalled_rq = rq; + gse->submission_stall_reason = STALL_SCHED_DISABLE; } clear_bit(CONTEXT_LRCA_DIRTY, &ce->flags); out: - guc->total_num_rq_with_no_guc_id -= ce->guc_num_rq_submit_no_id; - GEM_BUG_ON(guc->total_num_rq_with_no_guc_id < 0); + gse->total_num_rq_with_no_guc_id -= ce->guc_num_rq_submit_no_id; + GEM_BUG_ON(gse->total_num_rq_with_no_guc_id < 0); list_for_each_entry_reverse(rq, &ce->guc_active.requests, sched.link) if (request_has_no_guc_id(rq)) { @@ -2612,7 +2675,7 @@ static int tasklet_pin_guc_id(struct intel_guc *guc, struct i915_request *rq) * from a context that has scheduling enabled. We have to disable * scheduling before deregistering the context and it isn't safe to do * in the tasklet because of lock inversion (ce->guc_state.lock must be - * acquired before guc->sched_engine->lock). To work around this + * acquired before gse->sched_engine.lock). To work around this * we do the schedule disable in retire workqueue and block the tasklet * until the schedule done G2H returns. Returning non-zero here kicks * the workqueue. @@ -2624,6 +2687,7 @@ static int guc_request_alloc(struct i915_request *rq) { struct intel_context *ce = rq->context; struct intel_guc *guc = ce_to_guc(ce); + struct guc_submit_engine *gse = ce_to_gse(ce); unsigned long flags; int ret; @@ -2635,8 +2699,8 @@ static int guc_request_alloc(struct i915_request *rq) * ready to submit). Don't allocate one here, defer to submission in the * tasklet. */ - if (test_and_update_guc_ids_exhausted(guc) || - too_many_guc_ids_not_ready(guc, ce)) { + if (test_and_update_guc_ids_exhausted(gse) || + too_many_guc_ids_not_ready(gse, ce)) { set_bit(I915_FENCE_FLAG_GUC_ID_NOT_PINNED, &rq->fence.flags); goto out; } @@ -2691,7 +2755,7 @@ static int guc_request_alloc(struct i915_request *rq) * submissions we return to allocating guc_ids in this function. 
*/ set_bit(I915_FENCE_FLAG_GUC_ID_NOT_PINNED, &rq->fence.flags); - set_and_update_guc_ids_exhausted(guc); + set_and_update_guc_ids_exhausted(gse); incr_num_rq_not_ready(ce); return 0; @@ -3099,17 +3163,41 @@ static void guc_sched_engine_destroy(struct kref *kref) { struct i915_sched_engine *sched_engine = container_of(kref, typeof(*sched_engine), ref); - struct intel_guc *guc = sched_engine->private_data; + struct guc_submit_engine *gse = + container_of(sched_engine, typeof(*gse), sched_engine); + struct intel_guc *guc = gse->sched_engine.private_data; - guc->sched_engine = NULL; + guc->gse[gse->id] = NULL; tasklet_kill(&sched_engine->tasklet); /* flush the callback */ - kfree(sched_engine); + kfree(gse); +} + +static void guc_submit_engine_init(struct intel_guc *guc, + struct guc_submit_engine *gse, + int id) +{ + struct i915_sched_engine *sched_engine = &gse->sched_engine; + + i915_sched_engine_init(sched_engine, ENGINE_VIRTUAL); + INIT_WORK(&gse->retire_worker, retire_worker_func); + tasklet_setup(&sched_engine->tasklet, gse_submission_tasklet); + sched_engine->schedule = i915_schedule; + sched_engine->disabled = guc_sched_engine_disabled; + sched_engine->destroy = guc_sched_engine_destroy; + sched_engine->bump_inflight_request_prio = + guc_bump_inflight_request_prio; + sched_engine->retire_inflight_request_prio = + guc_retire_inflight_request_prio; + sched_engine->private_data = guc; + gse->id = id; } int intel_guc_submission_setup(struct intel_engine_cs *engine) { struct drm_i915_private *i915 = engine->i915; struct intel_guc *guc = &engine->gt->uc.guc; + struct i915_sched_engine *sched_engine; + int ret, i; /* * The setup relies on several assumptions (e.g. irqs always enabled) @@ -3117,24 +3205,20 @@ int intel_guc_submission_setup(struct intel_engine_cs *engine) */ GEM_BUG_ON(GRAPHICS_VER(i915) < 11); - if (!guc->sched_engine) { - guc->sched_engine = i915_sched_engine_create(ENGINE_VIRTUAL); - if (!guc->sched_engine) - return -ENOMEM; - - guc->sched_engine->schedule = i915_schedule; - guc->sched_engine->disabled = guc_sched_engine_disabled; - guc->sched_engine->private_data = guc; - guc->sched_engine->destroy = guc_sched_engine_destroy; - guc->sched_engine->bump_inflight_request_prio = - guc_bump_inflight_request_prio; - guc->sched_engine->retire_inflight_request_prio = - guc_retire_inflight_request_prio; - tasklet_setup(&guc->sched_engine->tasklet, - guc_submission_tasklet); + if (!guc->gse[0]) { + for (i = 0; i < GUC_SUBMIT_ENGINE_MAX; ++i) { + guc->gse[i] = kzalloc(sizeof(*guc->gse[i]), GFP_KERNEL); + if (!guc->gse[i]) { + ret = -ENOMEM; + goto put_sched_engine; + } + guc_submit_engine_init(guc, guc->gse[i], i); + } } + + sched_engine = guc_to_sched_engine(guc, GUC_SUBMIT_ENGINE_SINGLE_LRC); i915_sched_engine_put(engine->sched_engine); - engine->sched_engine = i915_sched_engine_get(guc->sched_engine); + engine->sched_engine = i915_sched_engine_get(sched_engine); guc_default_vfuncs(engine); guc_default_irqs(engine); @@ -3150,6 +3234,16 @@ int intel_guc_submission_setup(struct intel_engine_cs *engine) engine->release = guc_release; return 0; + +put_sched_engine: + for (i = 0; i < GUC_SUBMIT_ENGINE_MAX; ++i) { + struct i915_sched_engine *sched_engine = + guc_to_sched_engine(guc, i); + + if (sched_engine) + i915_sched_engine_put(sched_engine); + } + return ret; } void intel_guc_submission_enable(struct intel_guc *guc) @@ -3249,14 +3343,16 @@ int intel_guc_deregister_done_process_msg(struct intel_guc *guc, register_context(ce, true); guc_signal_context_fence(ce); if 
(context_block_tasklet(ce)) { - GEM_BUG_ON(guc->submission_stall_reason != + struct guc_submit_engine *gse = ce_to_gse(ce); + + GEM_BUG_ON(gse->submission_stall_reason != STALL_DEREGISTER_CONTEXT); clr_context_block_tasklet(ce); - guc->submission_stall_reason = STALL_MOVE_LRC_TAIL; - clr_tasklet_blocked(guc); + gse->submission_stall_reason = STALL_MOVE_LRC_TAIL; + clr_tasklet_blocked(gse); - kick_tasklet(ce_to_guc(ce)); + kick_tasklet(gse); } intel_context_put(ce); } else if (context_destroyed(ce)) { @@ -3322,11 +3418,13 @@ int intel_guc_sched_done_process_msg(struct intel_guc *guc, spin_unlock_irqrestore(&ce->guc_state.lock, flags); if (context_block_tasklet(ce)) { + struct guc_submit_engine *gse = ce_to_gse(ce); + clr_context_block_tasklet(ce); - guc->submission_stall_reason = STALL_REGISTER_CONTEXT; - clr_tasklet_blocked(guc); + gse->submission_stall_reason = STALL_REGISTER_CONTEXT; + clr_tasklet_blocked(gse); - kick_tasklet(ce_to_guc(ce)); + kick_tasklet(gse); } if (banned) { @@ -3358,7 +3456,7 @@ static void capture_error_state(struct intel_guc *guc, static void guc_context_replay(struct intel_context *ce) { __guc_reset_context(ce, true); - kick_tasklet(ce_to_guc(ce)); + kick_tasklet(ce_to_gse(ce)); } static void guc_handle_context_reset(struct intel_guc *guc, @@ -3503,35 +3601,32 @@ void intel_guc_dump_active_requests(struct intel_engine_cs *engine, } } -void intel_guc_submission_print_info(struct intel_guc *guc, - struct drm_printer *p) +static void gse_log_submission_info(struct guc_submit_engine *gse, + struct drm_printer *p, int id) { - struct i915_sched_engine *sched_engine = guc->sched_engine; + struct i915_sched_engine *sched_engine = &gse->sched_engine; struct rb_node *rb; unsigned long flags; if (!sched_engine) return; - drm_printf(p, "GuC Number Outstanding Submission G2H: %u\n", - atomic_read(&guc->outstanding_submission_g2h)); - drm_printf(p, "GuC Number GuC IDs: %u\n", guc->num_guc_ids); - drm_printf(p, "GuC Max GuC IDs: %u\n", guc->max_guc_ids); - drm_printf(p, "GuC tasklet count: %u\n", + drm_printf(p, "GSE[%d] tasklet count: %u\n", id, atomic_read(&sched_engine->tasklet.count)); - drm_printf(p, "GuC submit flags: 0x%04lx\n", guc->flags); - drm_printf(p, "GuC total number request without guc_id: %d\n", - guc->total_num_rq_with_no_guc_id); - drm_printf(p, "GuC Number GuC IDs not ready: %d\n", - atomic_read(&guc->num_guc_ids_not_ready)); - drm_printf(p, "GuC stall reason: %d\n", guc->submission_stall_reason); - drm_printf(p, "GuC stalled request: %s\n", - yesno(guc->stalled_rq)); - drm_printf(p, "GuC stalled context: %s\n\n", - yesno(guc->stalled_context)); + drm_printf(p, "GSE[%d] submit flags: 0x%04lx\n", id, gse->flags); + drm_printf(p, "GSE[%d] total number request without guc_id: %d\n", + id, gse->total_num_rq_with_no_guc_id); + drm_printf(p, "GSE[%d] Number GuC IDs not ready: %d\n", + id, atomic_read(&gse->num_guc_ids_not_ready)); + drm_printf(p, "GSE[%d] stall reason: %d\n", + id, gse->submission_stall_reason); + drm_printf(p, "GSE[%d] stalled request: %s\n", + id, yesno(gse->stalled_rq)); + drm_printf(p, "GSE[%d] stalled context: %s\n\n", + id, yesno(gse->stalled_context)); spin_lock_irqsave(&sched_engine->lock, flags); - drm_printf(p, "Requests in GuC submit tasklet:\n"); + drm_printf(p, "Requests in GSE[%d] submit tasklet:\n", id); for (rb = rb_first_cached(&sched_engine->queue); rb; rb = rb_next(rb)) { struct i915_priolist *pl = to_priolist(rb); struct i915_request *rq; @@ -3561,6 +3656,20 @@ static inline void guc_log_context_priority(struct drm_printer *p, 
drm_printf(p, "\n"); } +void intel_guc_submission_print_info(struct intel_guc *guc, + struct drm_printer *p) +{ + int i; + + drm_printf(p, "GuC Number Outstanding Submission G2H: %u\n", + atomic_read(&guc->outstanding_submission_g2h)); + drm_printf(p, "GuC Number GuC IDs: %d\n", guc->num_guc_ids); + drm_printf(p, "GuC Max Number GuC IDs: %d\n\n", guc->max_guc_ids); + + for (i = 0; i < GUC_SUBMIT_ENGINE_MAX; ++i) + gse_log_submission_info(guc->gse[i], p, i); +} + void intel_guc_submission_print_context_info(struct intel_guc *guc, struct drm_printer *p) { @@ -3594,6 +3703,7 @@ guc_create_virtual(struct intel_engine_cs **siblings, unsigned int count) { struct guc_virtual_engine *ve; struct intel_guc *guc; + struct i915_sched_engine *sched_engine; unsigned int n; int err; @@ -3602,6 +3712,7 @@ guc_create_virtual(struct intel_engine_cs **siblings, unsigned int count) return ERR_PTR(-ENOMEM); guc = &siblings[0]->gt->uc.guc; + sched_engine = guc_to_sched_engine(guc, GUC_SUBMIT_ENGINE_SINGLE_LRC); ve->base.i915 = siblings[0]->i915; ve->base.gt = siblings[0]->gt; @@ -3615,7 +3726,7 @@ guc_create_virtual(struct intel_engine_cs **siblings, unsigned int count) snprintf(ve->base.name, sizeof(ve->base.name), "virtual"); - ve->base.sched_engine = i915_sched_engine_get(guc->sched_engine); + ve->base.sched_engine = i915_sched_engine_get(sched_engine); ve->base.cops = &virtual_guc_context_ops; ve->base.request_alloc = guc_request_alloc; diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission_types.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission_types.h new file mode 100644 index 000000000000..0c224ab18c02 --- /dev/null +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission_types.h @@ -0,0 +1,52 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright © 2014-2019 Intel Corporation + */ + +#ifndef _INTEL_GUC_SUBMISSION_TYPES_H_ +#define _INTEL_GUC_SUBMISSION_TYPES_H_ + +#include "gt/intel_engine_types.h" +#include "gt/intel_context_types.h" +#include "i915_scheduler_types.h" + +struct intel_guc; +struct i915_request; + +/* GuC Virtual Engine */ +struct guc_virtual_engine { + struct intel_engine_cs base; + struct intel_context context; +}; + +/* + * Object which encapsulates the globally operated on i915_sched_engine + + * the GuC submission state machine described in intel_guc_submission.c. + */ +struct guc_submit_engine { + struct i915_sched_engine sched_engine; + struct work_struct retire_worker; + struct i915_request *stalled_rq; + struct intel_context *stalled_context; + unsigned long flags; + int total_num_rq_with_no_guc_id; + atomic_t num_guc_ids_not_ready; + int id; + + /* + * Submisson stall reason. See intel_guc_submission.c for detailed + * description. 
+ */ + enum { + STALL_NONE, + STALL_GUC_ID_WORKQUEUE, + STALL_GUC_ID_TASKLET, + STALL_SCHED_DISABLE, + STALL_REGISTER_CONTEXT, + STALL_DEREGISTER_CONTEXT, + STALL_MOVE_LRC_TAIL, + STALL_ADD_REQUEST, + } submission_stall_reason; +}; + +#endif diff --git a/drivers/gpu/drm/i915/i915_scheduler.c b/drivers/gpu/drm/i915/i915_scheduler.c index 3fccae3672c1..b9530584fc1e 100644 --- a/drivers/gpu/drm/i915/i915_scheduler.c +++ b/drivers/gpu/drm/i915/i915_scheduler.c @@ -452,15 +452,9 @@ static bool default_disabled(struct i915_sched_engine *sched_engine) return false; } -struct i915_sched_engine * -i915_sched_engine_create(unsigned int subclass) +void i915_sched_engine_init(struct i915_sched_engine *sched_engine, + unsigned int subclass) { - struct i915_sched_engine *sched_engine; - - sched_engine = kzalloc(sizeof(*sched_engine), GFP_KERNEL); - if (!sched_engine) - return NULL; - kref_init(&sched_engine->ref); sched_engine->queue = RB_ROOT_CACHED; @@ -485,6 +479,18 @@ i915_sched_engine_create(unsigned int subclass) lock_map_release(&sched_engine->lock.dep_map); local_irq_enable(); #endif +} + +struct i915_sched_engine * +i915_sched_engine_create(unsigned int subclass) +{ + struct i915_sched_engine *sched_engine; + + sched_engine = kzalloc(sizeof(*sched_engine), GFP_KERNEL); + if (!sched_engine) + return NULL; + + i915_sched_engine_init(sched_engine, subclass); return sched_engine; } diff --git a/drivers/gpu/drm/i915/i915_scheduler.h b/drivers/gpu/drm/i915/i915_scheduler.h index f4d9811ade5b..c7c093bc2972 100644 --- a/drivers/gpu/drm/i915/i915_scheduler.h +++ b/drivers/gpu/drm/i915/i915_scheduler.h @@ -48,6 +48,9 @@ static inline void i915_priolist_free(struct i915_priolist *p) __i915_priolist_free(p); } +void i915_sched_engine_init(struct i915_sched_engine *sched_engine, + unsigned int subclass); + struct i915_sched_engine * i915_sched_engine_create(unsigned int subclass); From patchwork Tue Jul 20 20:57:27 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389321 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 18978C07E95 for ; Tue, 20 Jul 2021 20:40:34 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id D9CAF60FEA for ; Tue, 20 Jul 2021 20:40:33 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org D9CAF60FEA Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id DD69D6E544; Tue, 20 Jul 2021 20:40:19 +0000 (UTC) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by gabe.freedesktop.org (Postfix) with ESMTPS id F021B6E4DE; Tue, 20 Jul 2021 20:40:16 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="209421880" 
X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906064" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:15 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 07/42] drm/i915/guc: Check return of __xa_store when registering a context Date: Tue, 20 Jul 2021 13:57:27 -0700 Message-Id: <20210720205802.39610-8-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Check the return of __xa_store when registering a context, as this can fail in a rare case if memory cannot be allocated. If this occurs, fall back on the tasklet flow control and try again in the future. Signed-off-by: Matthew Brost --- drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 14 +++++++++++--- 1 file changed, 11 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index fce5c1d8cfda..2873018eb36e 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -502,18 +502,24 @@ static inline bool lrc_desc_registered(struct intel_guc *guc, u32 id) return __get_context(guc, id); } -static inline void set_lrc_desc_registered(struct intel_guc *guc, u32 id, +static inline int set_lrc_desc_registered(struct intel_guc *guc, u32 id, struct intel_context *ce) { unsigned long flags; + void *ret; /* * xarray API doesn't have xa_save_irqsave wrapper, so calling the * lower level functions directly.
*/ xa_lock_irqsave(&guc->context_lookup, flags); - __xa_store(&guc->context_lookup, id, ce, GFP_ATOMIC); + ret = __xa_store(&guc->context_lookup, id, ce, GFP_ATOMIC); xa_unlock_irqrestore(&guc->context_lookup, flags); + + if (unlikely(xa_is_err(ret))) + return -EBUSY; /* Try again in future */ + + return 0; } static int guc_submission_send_busy_loop(struct intel_guc* guc, @@ -1840,7 +1846,9 @@ static int guc_lrc_desc_pin(struct intel_context *ce, bool loop) rcu_read_unlock(); reset_lrc_desc(guc, desc_idx); - set_lrc_desc_registered(guc, desc_idx, ce); + ret = set_lrc_desc_registered(guc, desc_idx, ce); + if (unlikely(ret)) + return ret; desc = __get_lrc_desc(guc, desc_idx); desc->engine_class = engine_class_to_guc_class(engine->class); From patchwork Tue Jul 20 20:57:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389341 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 833A1C07E95 for ; Tue, 20 Jul 2021 20:40:58 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 4EBDC60FEA for ; Tue, 20 Jul 2021 20:40:58 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 4EBDC60FEA Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id CB9766E52E; Tue, 20 Jul 2021 20:40:27 +0000 (UTC) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by gabe.freedesktop.org (Postfix) with ESMTPS id 352AA6E50C; Tue, 20 Jul 2021 20:40:17 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="296885366" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="296885366" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906065" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:15 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 08/42] drm/i915/guc: Non-static lrc descriptor registration buffer Date: Tue, 20 Jul 2021 13:57:28 -0700 Message-Id: <20210720205802.39610-9-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Dynamically 
allocate space for lrc descriptor registration with the GuC rather than using a large static buffer indexed by the guc_id. If no space is available to register a context, fall back to tasklet flow control mechanism. Only allow 1/2 of the space to be allocated outside the tasklet to prevent unready requests/contexts from consuming all registration space. Signed-off-by: Matthew Brost --- drivers/gpu/drm/i915/gt/intel_context_types.h | 3 + drivers/gpu/drm/i915/gt/uc/intel_guc.h | 9 +- .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 204 ++++++++++++------ 3 files changed, 152 insertions(+), 64 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h index 7536129c9a5a..aabc1b349044 100644 --- a/drivers/gpu/drm/i915/gt/intel_context_types.h +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h @@ -179,6 +179,9 @@ struct intel_context { /* GuC scheduling state flags that do not require a lock. */ atomic_t guc_sched_state_no_lock; + /* GuC lrc descriptor registration buffer */ + unsigned int guc_lrcd_reg_idx; + /* GuC LRC descriptor ID */ u16 guc_id; diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc.h index e278ad376986..3198480f717c 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h @@ -69,8 +69,13 @@ struct intel_guc { u32 ads_regset_size; u32 ads_golden_ctxt_size; - struct i915_vma *lrc_desc_pool; - void *lrc_desc_pool_vaddr; + /* GuC LRC descriptor registration */ + struct { + struct i915_vma *vma; + void *vaddr; + struct ida ida; + unsigned int max_idx; + } lrcd_reg; /* guc_id to intel_context lookup */ struct xarray context_lookup; diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index 2873018eb36e..e1a35f647025 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -436,65 +436,54 @@ static inline struct i915_priolist *to_priolist(struct rb_node *rb) return rb_entry(rb, struct i915_priolist, node); } -static struct guc_lrc_desc *__get_lrc_desc(struct intel_guc *guc, u32 index) +static u32 __get_lrc_desc_offset(struct intel_guc *guc, int index) { - struct guc_lrc_desc *base = guc->lrc_desc_pool_vaddr; - + GEM_BUG_ON(index >= guc->lrcd_reg.max_idx); GEM_BUG_ON(index >= guc->max_guc_ids); - return &base[index]; + return intel_guc_ggtt_offset(guc, guc->lrcd_reg.vma) + + (index * sizeof(struct guc_lrc_desc)); } -static inline struct intel_context *__get_context(struct intel_guc *guc, u32 id) +static struct guc_lrc_desc *__get_lrc_desc(struct intel_guc *guc, int index) { - struct intel_context *ce = xa_load(&guc->context_lookup, id); + struct guc_lrc_desc *desc; - GEM_BUG_ON(id >= guc->max_guc_ids); + GEM_BUG_ON(index >= guc->lrcd_reg.max_idx); + GEM_BUG_ON(index >= guc->max_guc_ids); - return ce; + desc = guc->lrcd_reg.vaddr; + desc = &desc[index]; + memset(desc, 0, sizeof(*desc)); + + return desc; } -static int guc_lrc_desc_pool_create(struct intel_guc *guc) +static inline struct intel_context *__get_context(struct intel_guc *guc, u32 id) { - u32 size; - int ret; - - size = PAGE_ALIGN(sizeof(struct guc_lrc_desc) * guc->max_guc_ids); - ret = intel_guc_allocate_and_map_vma(guc, size, &guc->lrc_desc_pool, - (void **)&guc->lrc_desc_pool_vaddr); - if (ret) - return ret; + struct intel_context *ce = xa_load(&guc->context_lookup, id); - return 0; -} + GEM_BUG_ON(id >= guc->max_guc_ids); -static void 
guc_lrc_desc_pool_destroy(struct intel_guc *guc) -{ - guc->lrc_desc_pool_vaddr = NULL; - i915_vma_unpin_and_release(&guc->lrc_desc_pool, I915_VMA_RELEASE_MAP); + return ce; } static inline bool guc_submission_initialized(struct intel_guc *guc) { - return guc->lrc_desc_pool_vaddr != NULL; + return guc->lrcd_reg.max_idx != 0; } -static inline void reset_lrc_desc(struct intel_guc *guc, u32 id) +static inline void clr_lrc_desc_registered(struct intel_guc *guc, u32 id) { - if (likely(guc_submission_initialized(guc))) { - struct guc_lrc_desc *desc = __get_lrc_desc(guc, id); - unsigned long flags; - - memset(desc, 0, sizeof(*desc)); + unsigned long flags; - /* - * xarray API doesn't have xa_erase_irqsave wrapper, so calling - * the lower level functions directly. - */ - xa_lock_irqsave(&guc->context_lookup, flags); - __xa_erase(&guc->context_lookup, id); - xa_unlock_irqrestore(&guc->context_lookup, flags); - } + /* + * xarray API doesn't have xa_erase_irqsave wrapper, so calling + * the lower level functions directly. + */ + xa_lock_irqsave(&guc->context_lookup, flags); + __xa_erase(&guc->context_lookup, id); + xa_unlock_irqrestore(&guc->context_lookup, flags); } static inline bool lrc_desc_registered(struct intel_guc *guc, u32 id) @@ -1376,6 +1365,9 @@ static void retire_worker_func(struct work_struct *w) } } +static int guc_lrcd_reg_init(struct intel_guc *guc); +static void guc_lrcd_reg_fini(struct intel_guc *guc); + /* * Set up the memory resources to be shared with the GuC (via the GGTT) * at firmware loading time. @@ -1384,17 +1376,12 @@ int intel_guc_submission_init(struct intel_guc *guc) { int ret; - if (guc->lrc_desc_pool) + if (guc_submission_initialized(guc)) return 0; - ret = guc_lrc_desc_pool_create(guc); + ret = guc_lrcd_reg_init(guc); if (ret) return ret; - /* - * Keep static analysers happy, let them know that we allocated the - * vma after testing that it didn't exist earlier. 
- */ - GEM_BUG_ON(!guc->lrc_desc_pool); xa_init_flags(&guc->context_lookup, XA_FLAGS_LOCK_IRQ); @@ -1410,10 +1397,10 @@ void intel_guc_submission_fini(struct intel_guc *guc) { int i; - if (!guc->lrc_desc_pool) + if (!guc_submission_initialized(guc)) return; - guc_lrc_desc_pool_destroy(guc); + guc_lrcd_reg_fini(guc); for (i = 0; i < GUC_SUBMIT_ENGINE_MAX; ++i) { struct i915_sched_engine *sched_engine = @@ -1486,6 +1473,7 @@ static bool need_tasklet(struct guc_submit_engine *gse, struct intel_context *ce return guc_ids_exhausted(gse) || submission_disabled(guc) || gse->stalled_rq || gse->stalled_context || !lrc_desc_registered(guc, ce->guc_id) || + context_needs_register(ce) || !i915_sched_engine_is_empty(sched_engine); } @@ -1537,7 +1525,7 @@ static void __release_guc_id(struct intel_guc *guc, struct intel_context *ce) { if (!context_guc_id_invalid(ce)) { ida_simple_remove(&guc->guc_ids, ce->guc_id); - reset_lrc_desc(guc, ce->guc_id); + clr_lrc_desc_registered(guc, ce->guc_id); set_context_guc_id_invalid(ce); } if (!list_empty(&ce->guc_id_link)) @@ -1731,14 +1719,14 @@ static void unpin_guc_id(struct intel_guc *guc, } static int __guc_action_register_context(struct intel_guc *guc, + struct intel_context *ce, u32 guc_id, - u32 offset, bool loop) { u32 action[] = { INTEL_GUC_ACTION_REGISTER_CONTEXT, guc_id, - offset, + __get_lrc_desc_offset(guc, ce->guc_lrcd_reg_idx), }; return guc_submission_send_busy_loop(guc, action, ARRAY_SIZE(action), @@ -1748,13 +1736,11 @@ static int __guc_action_register_context(struct intel_guc *guc, static int register_context(struct intel_context *ce, bool loop) { struct intel_guc *guc = ce_to_guc(ce); - u32 offset = intel_guc_ggtt_offset(guc, guc->lrc_desc_pool) + - ce->guc_id * sizeof(struct guc_lrc_desc); int ret; trace_intel_context_register(ce); - ret = __guc_action_register_context(guc, ce->guc_id, offset, loop); + ret = __guc_action_register_context(guc, ce, ce->guc_id, loop); set_context_registered(ce); return ret; } @@ -1814,6 +1800,86 @@ static void guc_context_policy_init(struct intel_engine_cs *engine, static inline u8 map_i915_prio_to_guc_prio(int prio); +static int alloc_lrcd_reg_idx_buffer(struct intel_guc *guc, int num_per_vma) +{ + u32 size = num_per_vma * sizeof(struct guc_lrc_desc); + struct i915_vma **vma = &guc->lrcd_reg.vma; + void **vaddr = &guc->lrcd_reg.vaddr; + int ret; + + GEM_BUG_ON(!is_power_of_2(size)); + + ret = intel_guc_allocate_and_map_vma(guc, size, vma, vaddr); + if (unlikely(ret)) + return ret; + + guc->lrcd_reg.max_idx += num_per_vma; + + return 0; +} + +static int alloc_lrcd_reg_idx(struct intel_guc *guc, bool tasklet) +{ + int ret; + gfp_t gfp = tasklet ? GFP_ATOMIC : + GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN; + + might_sleep_if(!tasklet); + + /* + * We only allow 1/2 of the space to be allocated outside of tasklet + * (flow control) to ensure requests that are not ready don't consume + * all context registration space. + */ + ret = ida_simple_get(&guc->lrcd_reg.ida, 0, + tasklet ? 
guc->lrcd_reg.max_idx : + guc->lrcd_reg.max_idx / 2, gfp); + if (unlikely(ret < 0)) + return -EBUSY; + + return ret; +} + +static void __free_lrcd_reg_idx(struct intel_guc *guc, struct intel_context *ce) +{ + if (ce->guc_lrcd_reg_idx && guc->lrcd_reg.max_idx) { + ida_simple_remove(&guc->lrcd_reg.ida, ce->guc_lrcd_reg_idx); + ce->guc_lrcd_reg_idx = 0;; + } +} + +static void free_lrcd_reg_idx(struct intel_guc *guc, struct intel_context *ce) +{ + __free_lrcd_reg_idx(guc, ce); +} + +static int guc_lrcd_reg_init(struct intel_guc *guc) +{ + unsigned buffer_size = I915_GTT_PAGE_SIZE_4K * 16; + int ret; + + ida_init(&guc->lrcd_reg.ida); + + ret = alloc_lrcd_reg_idx_buffer(guc, buffer_size / + sizeof(struct guc_lrc_desc)); + if (unlikely(ret)) + return ret; + + /* Zero is reserved */ + ret = alloc_lrcd_reg_idx(guc, false); + GEM_BUG_ON(ret); + + return ret; +} + +static void guc_lrcd_reg_fini(struct intel_guc *guc) +{ + i915_vma_unpin_and_release(&guc->lrcd_reg.vma, + I915_VMA_RELEASE_MAP); + ida_destroy(&guc->lrcd_reg.ida); + guc->lrcd_reg.max_idx = 0; +} + static int guc_lrc_desc_pin(struct intel_context *ce, bool loop) { struct intel_engine_cs *engine = ce->engine; @@ -1837,6 +1903,14 @@ static int guc_lrc_desc_pin(struct intel_context *ce, bool loop) GEM_BUG_ON(i915_gem_object_is_lmem(guc->ct.vma->obj) != i915_gem_object_is_lmem(ce->ring->vma->obj)); + /* Allocate space for registeration */ + if (likely(!ce->guc_lrcd_reg_idx)) { + ret = alloc_lrcd_reg_idx(guc, !loop); + if (unlikely(ret < 0)) + return ret; + ce->guc_lrcd_reg_idx = ret; + } + context_registered = lrc_desc_registered(guc, desc_idx); rcu_read_lock(); @@ -1845,12 +1919,11 @@ static int guc_lrc_desc_pin(struct intel_context *ce, bool loop) prio = ctx->sched.priority; rcu_read_unlock(); - reset_lrc_desc(guc, desc_idx); ret = set_lrc_desc_registered(guc, desc_idx, ce); if (unlikely(ret)) return ret; - desc = __get_lrc_desc(guc, desc_idx); + desc = __get_lrc_desc(guc, ce->guc_lrcd_reg_idx); desc->engine_class = engine_class_to_guc_class(engine->class); desc->engine_submit_mask = adjust_engine_mask(engine->class, engine->mask); @@ -1888,7 +1961,7 @@ static int guc_lrc_desc_pin(struct intel_context *ce, bool loop) } spin_unlock_irqrestore(&ce->guc_state.lock, flags); if (unlikely(disabled)) { - reset_lrc_desc(guc, desc_idx); + clr_lrc_desc_registered(guc, desc_idx); return 0; /* Will get registered later */ } } @@ -1915,7 +1988,7 @@ static int guc_lrc_desc_pin(struct intel_context *ce, bool loop) with_intel_runtime_pm(runtime_pm, wakeref) ret = register_context(ce, loop); if (unlikely(ret == -EBUSY)) - reset_lrc_desc(guc, desc_idx); + clr_lrc_desc_registered(guc, desc_idx); else if (unlikely(ret == -ENODEV)) ret = 0; /* Will get registered later */ } @@ -2176,6 +2249,8 @@ static void guc_context_ban(struct intel_context *ce, struct i915_request *rq) guc_id = prep_context_pending_disable(ce); spin_unlock_irqrestore(&ce->guc_state.lock, flags); + free_lrcd_reg_idx(guc, ce); + /* * In addition to disabling scheduling, set the preemption * timeout to the minimum value (1 us) so the banned context @@ -2269,6 +2344,7 @@ static void __guc_context_destroy(struct intel_context *ce) lrc_fini(ce); intel_context_fini(ce); + __free_lrcd_reg_idx(ce_to_guc(ce), ce); if (intel_engine_is_virtual(ce->engine)) { struct guc_virtual_engine *ve = @@ -2775,11 +2851,11 @@ static int guc_request_alloc(struct i915_request *rq) if (context_needs_lrc_desc_pin(ce, !!ret)) { ret = guc_lrc_desc_pin(ce, true); - if (unlikely(ret)) { /* unwind */ - if (ret == -EPIPE) { 
- disable_submission(guc); - goto out; /* GPU will be reset */ - } + if (unlikely(ret == -EBUSY)) + set_context_needs_register(ce); + else if (ret == -EPIPE) + disable_submission(guc); /* GPU will be reset */ + else if (unlikely(ret)) { /* unwind */ atomic_dec(&ce->guc_id_ref); unpin_guc_id(guc, ce, true); return ret; @@ -3405,6 +3481,8 @@ int intel_guc_sched_done_process_msg(struct intel_guc *guc, if (context_pending_enable(ce)) { clr_context_pending_enable(ce); + + free_lrcd_reg_idx(guc, ce); } else if (context_pending_disable(ce)) { bool banned; @@ -3673,6 +3751,8 @@ void intel_guc_submission_print_info(struct intel_guc *guc, atomic_read(&guc->outstanding_submission_g2h)); drm_printf(p, "GuC Number GuC IDs: %d\n", guc->num_guc_ids); drm_printf(p, "GuC Max Number GuC IDs: %d\n\n", guc->max_guc_ids); + drm_printf(p, "GuC max context registered: %u\n\n", + guc->lrcd_reg.max_idx); for (i = 0; i < GUC_SUBMIT_ENGINE_MAX; ++i) gse_log_submission_info(guc->gse[i], p, i); From patchwork Tue Jul 20 20:57:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389323 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B755AC07E95 for ; Tue, 20 Jul 2021 20:40:36 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 8B2B360FF3 for ; Tue, 20 Jul 2021 20:40:36 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 8B2B360FF3 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id EC2D56E513; Tue, 20 Jul 2021 20:40:20 +0000 (UTC) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by gabe.freedesktop.org (Postfix) with ESMTPS id EEAE66E3DA; Tue, 20 Jul 2021 20:40:16 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="209421881" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="209421881" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906066" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:15 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 09/42] drm/i915/guc: Take GT PM ref when deregistering context Date: Tue, 20 Jul 2021 13:57:29 -0700 Message-Id: <20210720205802.39610-10-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 
2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Take a GT PM reference to prevent intel_gt_wait_for_idle from short-circuiting while a context deregister H2G is in flight. Signed-off-by: Matthew Brost --- drivers/gpu/drm/i915/gt/intel_engine_pm.h | 5 + drivers/gpu/drm/i915/gt/intel_gt_pm.h | 13 +++ drivers/gpu/drm/i915/gt/uc/intel_guc.h | 4 + .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 98 +++++++++++++++---- 4 files changed, 101 insertions(+), 19 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.h b/drivers/gpu/drm/i915/gt/intel_engine_pm.h index 70ea46d6cfb0..17a5028ea177 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine_pm.h +++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.h @@ -16,6 +16,11 @@ intel_engine_pm_is_awake(const struct intel_engine_cs *engine) return intel_wakeref_is_active(&engine->wakeref); } +static inline void __intel_engine_pm_get(struct intel_engine_cs *engine) +{ + __intel_wakeref_get(&engine->wakeref); +} + static inline void intel_engine_pm_get(struct intel_engine_cs *engine) { intel_wakeref_get(&engine->wakeref); diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm.h b/drivers/gpu/drm/i915/gt/intel_gt_pm.h index d0588d8aaa44..a17bf0d4592b 100644 --- a/drivers/gpu/drm/i915/gt/intel_gt_pm.h +++ b/drivers/gpu/drm/i915/gt/intel_gt_pm.h @@ -41,6 +41,19 @@ static inline void intel_gt_pm_put_async(struct intel_gt *gt) intel_wakeref_put_async(&gt->wakeref); } +#define with_intel_gt_pm(gt, tmp) \ + for (tmp = 1, intel_gt_pm_get(gt); tmp; \ + intel_gt_pm_put(gt), tmp = 0) +#define with_intel_gt_pm_async(gt, tmp) \ + for (tmp = 1, intel_gt_pm_get(gt); tmp; \ + intel_gt_pm_put_async(gt), tmp = 0) +#define with_intel_gt_pm_if_awake(gt, tmp) \ + for (tmp = intel_gt_pm_get_if_awake(gt); tmp; \ + intel_gt_pm_put(gt), tmp = 0) +#define with_intel_gt_pm_if_awake_async(gt, tmp) \ + for (tmp = intel_gt_pm_get_if_awake(gt); tmp; \ + intel_gt_pm_put_async(gt), tmp = 0) + static inline int intel_gt_pm_wait_for_idle(struct intel_gt *gt) { return intel_wakeref_wait_for_idle(&gt->wakeref); } diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc.h index 3198480f717c..4fe7fd4ce10e 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h @@ -61,6 +61,10 @@ struct intel_guc { struct list_head guc_id_list_no_ref; struct list_head guc_id_list_unpinned; + spinlock_t destroy_lock; + struct list_head destroyed_contexts; + struct work_struct destroy_worker; + bool submission_supported; bool submission_selected; diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index e1a35f647025..f13558adc142 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -908,6 +908,7 @@ static void scrub_guc_desc_for_outstanding_g2h(struct intel_guc *guc) if (deregister) guc_signal_context_fence(ce); if (destroyed) { + intel_gt_pm_put_async(guc_to_gt(guc)); release_guc_id(guc, ce); __guc_context_destroy(ce); } @@ -1026,6 +1027,8 @@ static void guc_flush_submissions(struct intel_guc *guc) gse_flush_submissions(guc->gse[i]); } +static void guc_flush_destroyed_contexts(struct intel_guc *guc); + void intel_guc_submission_reset_prepare(struct intel_guc
*guc) spin_unlock_irq(&guc_to_gt(guc)->irq_lock); guc_flush_submissions(guc); + guc_flush_destroyed_contexts(guc); /* * Handle any outstanding G2Hs before reset. Call IRQ handler directly @@ -1368,6 +1372,8 @@ static void retire_worker_func(struct work_struct *w) static int guc_lrcd_reg_init(struct intel_guc *guc); static void guc_lrcd_reg_fini(struct intel_guc *guc); +static void destroy_worker_func(struct work_struct *w); + /* * Set up the memory resources to be shared with the GuC (via the GGTT) * at firmware loading time. @@ -1390,6 +1396,10 @@ int intel_guc_submission_init(struct intel_guc *guc) INIT_LIST_HEAD(&guc->guc_id_list_unpinned); ida_init(&guc->guc_ids); + spin_lock_init(&guc->destroy_lock); + INIT_LIST_HEAD(&guc->destroyed_contexts); + INIT_WORK(&guc->destroy_worker, destroy_worker_func); + return 0; } @@ -1400,6 +1410,7 @@ void intel_guc_submission_fini(struct intel_guc *guc) if (!guc_submission_initialized(guc)) return; + guc_flush_destroyed_contexts(guc); guc_lrcd_reg_fini(guc); for (i = 0; i < GUC_SUBMIT_ENGINE_MAX; ++i) { @@ -2326,11 +2337,29 @@ static void guc_context_sched_disable(struct intel_context *ce) static inline void guc_lrc_desc_unpin(struct intel_context *ce) { struct intel_guc *guc = ce_to_guc(ce); + struct intel_gt *gt = guc_to_gt(guc); + unsigned long flags; + bool disabled; + GEM_BUG_ON(!intel_gt_pm_is_awake(gt)); GEM_BUG_ON(!lrc_desc_registered(guc, ce->guc_id)); GEM_BUG_ON(ce != __get_context(guc, ce->guc_id)); GEM_BUG_ON(context_enabled(ce)); + /* Seal race with Reset */ + spin_lock_irqsave(&ce->guc_state.lock, flags); + disabled = submission_disabled(guc); + if (likely(!disabled)) { + __intel_gt_pm_get(gt); + set_context_destroyed(ce); + } + spin_unlock_irqrestore(&ce->guc_state.lock, flags); + if (unlikely(disabled)) { + release_guc_id(guc, ce); + __guc_context_destroy(ce); + return; + } + clr_context_registered(ce); deregister_context(ce, ce->guc_id, true); } @@ -2359,12 +2388,51 @@ static void __guc_context_destroy(struct intel_context *ce) } } +static void guc_flush_destroyed_contexts(struct intel_guc *guc) +{ + struct intel_context *ce, *cn; + unsigned long flags; + spin_lock_irqsave(&guc->destroy_lock, flags); + list_for_each_entry_safe(ce, cn, + &guc->destroyed_contexts, guc_id_link) { + list_del_init(&ce->guc_id_link); + release_guc_id(guc, ce); + __guc_context_destroy(ce); + } + spin_unlock_irqrestore(&guc->destroy_lock, flags); +} + +static void deregister_destroyed_contexts(struct intel_guc *guc) +{ + struct intel_context *ce, *cn; + unsigned long flags; + + spin_lock_irqsave(&guc->destroy_lock, flags); + list_for_each_entry_safe(ce, cn, + &guc->destroyed_contexts, guc_id_link) { + list_del_init(&ce->guc_id_link); + spin_unlock_irqrestore(&guc->destroy_lock, flags); + guc_lrc_desc_unpin(ce); + spin_lock_irqsave(&guc->destroy_lock, flags); + } + spin_unlock_irqrestore(&guc->destroy_lock, flags); +} + +static void destroy_worker_func(struct work_struct *w) +{ + struct intel_guc *guc = + container_of(w, struct intel_guc, destroy_worker); + struct intel_gt *gt = guc_to_gt(guc); + int tmp; + + with_intel_gt_pm(gt, tmp) + deregister_destroyed_contexts(guc); +} + static void guc_context_destroy(struct kref *kref) { struct intel_context *ce = container_of(kref, typeof(*ce), ref); - struct intel_runtime_pm *runtime_pm = ce->engine->uncore->rpm; struct intel_guc *guc = ce_to_guc(ce); - intel_wakeref_t wakeref; unsigned long flags; bool disabled; @@ -2404,12 +2472,12 @@ static void guc_context_destroy(struct kref *kref) 
list_del_init(&ce->guc_id_link); spin_unlock_irqrestore(&guc->contexts_lock, flags); - /* Seal race with Reset */ - spin_lock_irqsave(&ce->guc_state.lock, flags); + /* Seal race with reset */ + spin_lock_irqsave(&guc->destroy_lock, flags); disabled = submission_disabled(guc); if (likely(!disabled)) - set_context_destroyed(ce); - spin_unlock_irqrestore(&ce->guc_state.lock, flags); + list_add_tail(&ce->guc_id_link, &guc->destroyed_contexts); + spin_unlock_irqrestore(&guc->destroy_lock, flags); if (unlikely(disabled)) { release_guc_id(guc, ce); __guc_context_destroy(ce); @@ -2417,20 +2485,11 @@ static void guc_context_destroy(struct kref *kref) } /* - * We defer GuC context deregistration until the context is destroyed - * in order to save on CTBs. With this optimization ideally we only need - * 1 CTB to register the context during the first pin and 1 CTB to - * deregister the context when the context is destroyed. Without this - * optimization, a CTB would be needed every pin & unpin. - * - * XXX: Need to acqiure the runtime wakeref as this can be triggered - * from context_free_worker when runtime wakeref is not held. - * guc_lrc_desc_unpin requires the runtime as a GuC register is written - * in H2G CTB to deregister the context. A future patch may defer this - * H2G CTB if the runtime wakeref is zero. + * We use a worker to issue the H2G to deregister the context as we can + * take the GT PM for the first time which isn't allowed from an atomic + * context. */ - with_intel_runtime_pm(runtime_pm, wakeref) - guc_lrc_desc_unpin(ce); + queue_work(system_unbound_wq, &guc->destroy_worker); } static int guc_context_alloc(struct intel_context *ce) @@ -3441,6 +3500,7 @@ int intel_guc_deregister_done_process_msg(struct intel_guc *guc, intel_context_put(ce); } else if (context_destroyed(ce)) { /* Context has been destroyed */ + intel_gt_pm_put_async(guc_to_gt(guc)); release_guc_id(guc, ce); __guc_context_destroy(ce); } From patchwork Tue Jul 20 20:57:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389319 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8C0B8C636C8 for ; Tue, 20 Jul 2021 20:40:31 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 5B75360FF3 for ; Tue, 20 Jul 2021 20:40:31 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 5B75360FF3 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id A3B3E6E4F1; Tue, 20 Jul 2021 20:40:19 +0000 (UTC) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by gabe.freedesktop.org (Postfix) with ESMTPS id 5E7CA6E4DE; Tue, 20 Jul 2021 20:40:17 +0000 (UTC) 
X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="296885367" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="296885367" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906067" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:15 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 10/42] drm/i915: Add GT PM unpark worker Date: Tue, 20 Jul 2021 13:57:30 -0700 Message-Id: <20210720205802.39610-11-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Sometimes it is desirable to queue work up for later if the GT PM isn't held and run that work on next GT PM unpark. Implemented with a list in the GT of all pending work, workqueues in the list, a callback to add a workqueue to the list, and finally a wakeref post_get callback that iterates / drains the list + queues the workqueues. First user of this is deregistration of GuC contexts. Signed-off-by: Matthew Brost --- drivers/gpu/drm/i915/gt/intel_gt.c | 3 ++ drivers/gpu/drm/i915/gt/intel_gt_pm.c | 8 +++++ .../gpu/drm/i915/gt/intel_gt_pm_unpark_work.c | 35 +++++++++++++++++++ .../gpu/drm/i915/gt/intel_gt_pm_unpark_work.h | 32 +++++++++++++++++ drivers/gpu/drm/i915/gt/intel_gt_types.h | 3 ++ drivers/gpu/drm/i915/gt/uc/intel_guc.h | 3 +- .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 14 +++++--- drivers/gpu/drm/i915/intel_wakeref.c | 5 +++ drivers/gpu/drm/i915/intel_wakeref.h | 1 + 9 files changed, 99 insertions(+), 5 deletions(-) create mode 100644 drivers/gpu/drm/i915/gt/intel_gt_pm_unpark_work.c create mode 100644 drivers/gpu/drm/i915/gt/intel_gt_pm_unpark_work.h diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c b/drivers/gpu/drm/i915/gt/intel_gt.c index ceeb517ba259..172a859e43d2 100644 --- a/drivers/gpu/drm/i915/gt/intel_gt.c +++ b/drivers/gpu/drm/i915/gt/intel_gt.c @@ -29,6 +29,9 @@ void intel_gt_init_early(struct intel_gt *gt, struct drm_i915_private *i915) spin_lock_init(>->irq_lock); + spin_lock_init(>->pm_unpark_work_lock); + INIT_LIST_HEAD(>->pm_unpark_work_list); + INIT_LIST_HEAD(>->closed_vma); spin_lock_init(>->closed_lock); diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm.c b/drivers/gpu/drm/i915/gt/intel_gt_pm.c index 463a6ae605a0..ebd4d22c7b19 100644 --- a/drivers/gpu/drm/i915/gt/intel_gt_pm.c +++ b/drivers/gpu/drm/i915/gt/intel_gt_pm.c @@ -93,6 +93,13 @@ static int __gt_unpark(struct intel_wakeref *wf) return 0; } +static void __gt_unpark_work_queue(struct intel_wakeref *wf) +{ + struct intel_gt *gt = container_of(wf, typeof(*gt), wakeref); + + intel_gt_pm_unpark_work_queue(gt); +} + static int __gt_park(struct intel_wakeref *wf) { struct intel_gt *gt = container_of(wf, typeof(*gt), wakeref); @@ -123,6 +130,7 @@ static int __gt_park(struct intel_wakeref *wf) static const struct intel_wakeref_ops wf_ops = { .get = __gt_unpark, + .post_get = __gt_unpark_work_queue, .put = __gt_park, }; diff --git 
a/drivers/gpu/drm/i915/gt/intel_gt_pm_unpark_work.c b/drivers/gpu/drm/i915/gt/intel_gt_pm_unpark_work.c new file mode 100644 index 000000000000..23162dbd0c35 --- /dev/null +++ b/drivers/gpu/drm/i915/gt/intel_gt_pm_unpark_work.c @@ -0,0 +1,35 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright © 2021 Intel Corporation + */ + +#include "i915_drv.h" +#include "intel_runtime_pm.h" +#include "intel_gt_pm.h" + +void intel_gt_pm_unpark_work_queue(struct intel_gt *gt) +{ + struct intel_gt_pm_unpark_work *work, *next; + unsigned long flags; + + spin_lock_irqsave(>->pm_unpark_work_lock, flags); + list_for_each_entry_safe(work, next, + >->pm_unpark_work_list, link) { + list_del_init(&work->link); + queue_work(system_unbound_wq, &work->worker); + } + spin_unlock_irqrestore(>->pm_unpark_work_lock, flags); +} + +void intel_gt_pm_unpark_work_add(struct intel_gt *gt, + struct intel_gt_pm_unpark_work *work) +{ + unsigned long flags; + + spin_lock_irqsave(>->pm_unpark_work_lock, flags); + if (intel_gt_pm_is_awake(gt)) + queue_work(system_unbound_wq, &work->worker); + else if (list_empty(&work->link)) + list_add_tail(&work->link, >->pm_unpark_work_list); + spin_unlock_irqrestore(>->pm_unpark_work_lock, flags); +} diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm_unpark_work.h b/drivers/gpu/drm/i915/gt/intel_gt_pm_unpark_work.h new file mode 100644 index 000000000000..08e9011be023 --- /dev/null +++ b/drivers/gpu/drm/i915/gt/intel_gt_pm_unpark_work.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright © 2021 Intel Corporation + */ + +#ifndef INTEL_GT_PM_UNPARK_WORK_H +#define INTEL_GT_PM_UNPARK_WORK_H + +#include +#include + +struct intel_gt; + +struct intel_gt_pm_unpark_work { + struct list_head link; + struct work_struct worker; +}; + +void intel_gt_pm_unpark_work_queue(struct intel_gt *gt); + +void intel_gt_pm_unpark_work_add(struct intel_gt *gt, + struct intel_gt_pm_unpark_work *work); + +static inline void +intel_gt_pm_unpark_work_init(struct intel_gt_pm_unpark_work *work, + work_func_t fn) +{ + INIT_LIST_HEAD(&work->link); + INIT_WORK(&work->worker, fn); +} + +#endif /* INTEL_GT_PM_UNPARK_WORK_H */ diff --git a/drivers/gpu/drm/i915/gt/intel_gt_types.h b/drivers/gpu/drm/i915/gt/intel_gt_types.h index d93d578a4105..b681cb1bdb5f 100644 --- a/drivers/gpu/drm/i915/gt/intel_gt_types.h +++ b/drivers/gpu/drm/i915/gt/intel_gt_types.h @@ -91,6 +91,9 @@ struct intel_gt { struct intel_wakeref wakeref; atomic_t user_wakeref; + struct list_head pm_unpark_work_list; + spinlock_t pm_unpark_work_lock; + struct list_head closed_vma; spinlock_t closed_lock; /* guards the list of closed_vma */ diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc.h index 4fe7fd4ce10e..b299a6772823 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h @@ -18,6 +18,7 @@ #include "intel_uc_fw.h" #include "i915_utils.h" #include "i915_vma.h" +#include "gt/intel_gt_pm_unpark_work.h" struct __guc_ads_blob; @@ -63,7 +64,7 @@ struct intel_guc { spinlock_t destroy_lock; struct list_head destroyed_contexts; - struct work_struct destroy_worker; + struct intel_gt_pm_unpark_work destroy_worker; bool submission_supported; bool submission_selected; diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index f13558adc142..2fdfcec3b5fa 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -1397,8 +1397,9 @@ int 
intel_guc_submission_init(struct intel_guc *guc) ida_init(&guc->guc_ids); spin_lock_init(&guc->destroy_lock); + INIT_LIST_HEAD(&guc->destroyed_contexts); - INIT_WORK(&guc->destroy_worker, destroy_worker_func); + intel_gt_pm_unpark_work_init(&guc->destroy_worker, destroy_worker_func); return 0; } @@ -2420,13 +2421,18 @@ static void deregister_destroyed_contexts(struct intel_guc *guc) static void destroy_worker_func(struct work_struct *w) { + struct intel_gt_pm_unpark_work *destroy_worker = + container_of(w, struct intel_gt_pm_unpark_work, worker); struct intel_guc *guc = - container_of(w, struct intel_guc, destroy_worker); + container_of(destroy_worker, struct intel_guc, destroy_worker); struct intel_gt *gt = guc_to_gt(guc); int tmp; - with_intel_gt_pm(gt, tmp) + with_intel_gt_pm_if_awake(gt, tmp) deregister_destroyed_contexts(guc); + + if (!list_empty(&guc->destroyed_contexts)) + intel_gt_pm_unpark_work_add(gt, destroy_worker); } static void guc_context_destroy(struct kref *kref) @@ -2489,7 +2495,7 @@ static void guc_context_destroy(struct kref *kref) * take the GT PM for the first time which isn't allowed from an atomic * context. */ - queue_work(system_unbound_wq, &guc->destroy_worker); + intel_gt_pm_unpark_work_add(guc_to_gt(guc), &guc->destroy_worker); } static int guc_context_alloc(struct intel_context *ce) diff --git a/drivers/gpu/drm/i915/intel_wakeref.c b/drivers/gpu/drm/i915/intel_wakeref.c index dfd87d082218..282fc4f312e3 100644 --- a/drivers/gpu/drm/i915/intel_wakeref.c +++ b/drivers/gpu/drm/i915/intel_wakeref.c @@ -24,6 +24,8 @@ static void rpm_put(struct intel_wakeref *wf) int __intel_wakeref_get_first(struct intel_wakeref *wf) { + bool do_post = false; + /* * Treat get/put as different subclasses, as we may need to run * the put callback from under the shrinker and do not want to @@ -44,8 +46,11 @@ int __intel_wakeref_get_first(struct intel_wakeref *wf) } smp_mb__before_atomic(); /* release wf->count */ + do_post = true; } atomic_inc(&wf->count); + if (do_post && wf->ops->post_get) + wf->ops->post_get(wf); mutex_unlock(&wf->mutex); INTEL_WAKEREF_BUG_ON(atomic_read(&wf->count) <= 0); diff --git a/drivers/gpu/drm/i915/intel_wakeref.h b/drivers/gpu/drm/i915/intel_wakeref.h index 545c8f277c46..ef7e6a698e8a 100644 --- a/drivers/gpu/drm/i915/intel_wakeref.h +++ b/drivers/gpu/drm/i915/intel_wakeref.h @@ -30,6 +30,7 @@ typedef depot_stack_handle_t intel_wakeref_t; struct intel_wakeref_ops { int (*get)(struct intel_wakeref *wf); + void (*post_get)(struct intel_wakeref *wf); int (*put)(struct intel_wakeref *wf); }; From patchwork Tue Jul 20 20:57:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389355 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.9 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,UNWANTED_LANGUAGE_BODY, URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6B9F1C07E9B for ; Tue, 20 Jul 2021 20:41:15 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with 
ESMTPS id 36C6B60FEA for ; Tue, 20 Jul 2021 20:41:15 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 36C6B60FEA Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 9A5B96E55C; Tue, 20 Jul 2021 20:40:34 +0000 (UTC) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by gabe.freedesktop.org (Postfix) with ESMTPS id 23B2B6E49F; Tue, 20 Jul 2021 20:40:17 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="209421883" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="209421883" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906068" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:15 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 11/42] drm/i915/guc: Take engine PM when a context is pinned with GuC submission Date: Tue, 20 Jul 2021 13:57:31 -0700 Message-Id: <20210720205802.39610-12-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Taking a PM reference to prevent intel_gt_wait_for_idle from short circuiting while a scheduling of user context could be enabled. 
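In simplified form, the pairing introduced here looks roughly like the sketch below (illustrative only; the authoritative hunks, including the barrier-context special case, follow in the diff):

static int guc_context_pin(struct intel_context *ce, void *vaddr)
{
	int ret = __guc_context_pin(ce, ce->engine, vaddr);

	/* Hold an engine PM reference for the lifetime of the pin */
	if (likely(!ret && !intel_context_is_barrier(ce)))
		intel_engine_pm_get(ce->engine);

	return ret;
}

static void guc_context_unpin(struct intel_context *ce)
{
	/* ... existing unpin work (unpin_guc_id(), lrc_unpin(), ...) ... */

	/* Drop the reference taken at pin time */
	if (likely(!intel_context_is_barrier(ce)))
		intel_engine_pm_put(ce->engine);
}

Because every pinned user context now holds an engine PM reference, and engine PM in turn holds GT PM, intel_gt_wait_for_idle cannot short circuit while such a context could still have scheduling enabled.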
Signed-off-by: Matthew Brost --- drivers/gpu/drm/i915/Makefile | 1 + .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 36 +++++++++++++++++-- 2 files changed, 34 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile index ab7679957623..29b722ad3f8b 100644 --- a/drivers/gpu/drm/i915/Makefile +++ b/drivers/gpu/drm/i915/Makefile @@ -102,6 +102,7 @@ gt-y += \ gt/intel_gt_clock_utils.o \ gt/intel_gt_irq.o \ gt/intel_gt_pm.o \ + gt/intel_gt_pm_unpark_work.o \ gt/intel_gt_pm_irq.o \ gt/intel_gt_requests.o \ gt/intel_gtt.o \ diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index 2fdfcec3b5fa..8c3332e24d59 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -2041,7 +2041,12 @@ static int guc_context_pre_pin(struct intel_context *ce, static int guc_context_pin(struct intel_context *ce, void *vaddr) { - return __guc_context_pin(ce, ce->engine, vaddr); + int ret = __guc_context_pin(ce, ce->engine, vaddr); + + if (likely(!ret && !intel_context_is_barrier(ce))) + intel_engine_pm_get(ce->engine); + + return ret; } static void guc_context_unpin(struct intel_context *ce) @@ -2052,6 +2057,9 @@ static void guc_context_unpin(struct intel_context *ce) unpin_guc_id(guc, ce, true); lrc_unpin(ce); + + if (likely(!intel_context_is_barrier(ce))) + intel_engine_pm_put(ce->engine); } static void guc_context_post_unpin(struct intel_context *ce) @@ -2971,8 +2979,30 @@ static int guc_virtual_context_pre_pin(struct intel_context *ce, static int guc_virtual_context_pin(struct intel_context *ce, void *vaddr) { struct intel_engine_cs *engine = guc_virtual_get_sibling(ce->engine, 0); + int ret = __guc_context_pin(ce, engine, vaddr); + intel_engine_mask_t tmp, mask = ce->engine->mask; + + if (likely(!ret)) + for_each_engine_masked(engine, ce->engine->gt, mask, tmp) + intel_engine_pm_get(engine); + + return ret; +} + +static void guc_virtual_context_unpin(struct intel_context *ce) +{ + intel_engine_mask_t tmp, mask = ce->engine->mask; + struct intel_engine_cs *engine; + struct intel_guc *guc = ce_to_guc(ce); - return __guc_context_pin(ce, engine, vaddr); + GEM_BUG_ON(context_enabled(ce)); + GEM_BUG_ON(intel_context_is_barrier(ce)); + + unpin_guc_id(guc, ce, true); + lrc_unpin(ce); + + for_each_engine_masked(engine, ce->engine->gt, mask, tmp) + intel_engine_pm_put(engine); } static void guc_virtual_context_enter(struct intel_context *ce) @@ -3009,7 +3039,7 @@ static const struct intel_context_ops virtual_guc_context_ops = { .pre_pin = guc_virtual_context_pre_pin, .pin = guc_virtual_context_pin, - .unpin = guc_context_unpin, + .unpin = guc_virtual_context_unpin, .post_unpin = guc_context_post_unpin, .ban = guc_context_ban, From patchwork Tue Jul 20 20:57:32 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389347 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 31CF3C636C9 for ; Tue, 20 Jul 2021 20:41:04 +0000 (UTC) 
Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 0503760FEA for ; Tue, 20 Jul 2021 20:41:04 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 0503760FEA Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 12AF26E52F; Tue, 20 Jul 2021 20:40:28 +0000 (UTC) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by gabe.freedesktop.org (Postfix) with ESMTPS id 8B7426E512; Tue, 20 Jul 2021 20:40:17 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="296885368" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="296885368" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906069" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:15 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 12/42] drm/i915/guc: Don't call switch_to_kernel_context with GuC submission Date: Tue, 20 Jul 2021 13:57:32 -0700 Message-Id: <20210720205802.39610-13-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Calling switch_to_kernel_context isn't needed if the engine PM reference is taken while all contexts are pinned. By not calling switch_to_kernel_context we save on issuing a request to the engine. Signed-off-by: Matthew Brost --- drivers/gpu/drm/i915/gt/intel_engine_pm.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c b/drivers/gpu/drm/i915/gt/intel_engine_pm.c index 1f07ac4e0672..58099de6bf07 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c +++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c @@ -162,6 +162,10 @@ static bool switch_to_kernel_context(struct intel_engine_cs *engine) unsigned long flags; bool result = true; + /* No need to switch_to_kernel_context if GuC submission */ + if (intel_engine_uses_guc(engine)) + return true; + /* GPU is pointing to the void, as good as in the kernel context. 
*/ if (intel_gt_is_wedged(engine->gt)) return true; From patchwork Tue Jul 20 20:57:33 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389373 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 89446C636C9 for ; Tue, 20 Jul 2021 20:41:33 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 5F23260FF3 for ; Tue, 20 Jul 2021 20:41:33 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 5F23260FF3 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id D6B346E5AB; Tue, 20 Jul 2021 20:40:47 +0000 (UTC) Received: from mga12.intel.com (mga12.intel.com [192.55.52.136]) by gabe.freedesktop.org (Postfix) with ESMTPS id 005CB6E51B; Tue, 20 Jul 2021 20:40:17 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="190909009" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="190909009" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906071" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:15 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 13/42] drm/i915/guc: Selftest for GuC flow control Date: Tue, 20 Jul 2021 13:57:33 -0700 Message-Id: <20210720205802.39610-14-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Add 5 selftests for flow-control conditions that are hard to recreate from user space. The tests are listed below: 1. A test to verify that the number of guc_ids can be exhausted and all submissions still complete. 2. A test to verify that the flow control state machine can recover from a full GPU reset. 3. A test to verify that the lrcd registration slots can be exhausted and all submissions still complete. 4. A test to verify that the H2G channel can deadlock and a full GPU reset recovers the system. 5. A test to stress the CTB channel by submitting to lots of contexts and then immediately destroying them.
Tests 1, 2, and 3 also ensure when the flow control is triggered by unready requests those unready requests do not DoS ready requests. Signed-off-by: Matthew Brost --- drivers/gpu/drm/i915/gt/uc/intel_guc.h | 6 + drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 43 +- drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h | 9 + .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 16 + .../i915/gt/uc/intel_guc_submission_types.h | 2 + .../i915/gt/uc/selftest_guc_flow_control.c | 581 ++++++++++++++++++ .../drm/i915/selftests/i915_live_selftests.h | 1 + .../i915/selftests/intel_scheduler_helpers.c | 12 + .../i915/selftests/intel_scheduler_helpers.h | 1 + 9 files changed, 661 insertions(+), 10 deletions(-) create mode 100644 drivers/gpu/drm/i915/gt/uc/selftest_guc_flow_control.c diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc.h index b299a6772823..466cc6227503 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h @@ -103,6 +103,12 @@ struct intel_guc { /* To serialize the intel_guc_send actions */ struct mutex send_mutex; + + I915_SELFTEST_DECLARE(bool gse_hang_expected;) + I915_SELFTEST_DECLARE(bool deadlock_expected;) + I915_SELFTEST_DECLARE(bool bad_desc_expected;) + I915_SELFTEST_DECLARE(bool inject_bad_sched_disable;) + I915_SELFTEST_DECLARE(bool inject_corrupt_h2g;) }; static inline struct intel_guc *log_to_guc(struct intel_guc_log *log) diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c index 92976d205478..59adf06e821b 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c @@ -3,7 +3,6 @@ * Copyright © 2016-2019 Intel Corporation */ -#include #include #include #include @@ -414,8 +413,9 @@ static int ct_write(struct intel_guc_ct *ct, u32 *cmds = ctb->cmds; unsigned int i; - if (unlikely(desc->status)) - goto corrupted; + if (!I915_SELFTEST_ONLY(ct_to_guc(ct)->deadlock_expected)) + if (unlikely(desc->status)) + goto corrupted; GEM_BUG_ON(tail > size); @@ -443,6 +443,15 @@ static int ct_write(struct intel_guc_ct *ct, FIELD_PREP(GUC_CTB_MSG_0_NUM_DWORDS, len) | FIELD_PREP(GUC_CTB_MSG_0_FENCE, fence); +#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST) + if (ct_to_guc(ct)->inject_corrupt_h2g) { + header = FIELD_PREP(GUC_CTB_MSG_0_FORMAT, 3) | + FIELD_PREP(GUC_CTB_MSG_0_NUM_DWORDS, len + 5) | + FIELD_PREP(GUC_CTB_MSG_0_FENCE, 0xdead); + ct_to_guc(ct)->inject_corrupt_h2g = false; + } +#endif + type = (flags & INTEL_GUC_CT_SEND_NB) ? 
GUC_HXG_TYPE_EVENT : GUC_HXG_TYPE_REQUEST; hxg = FIELD_PREP(GUC_HXG_MSG_0_TYPE, type) | @@ -481,8 +490,12 @@ static int ct_write(struct intel_guc_ct *ct, return 0; corrupted: - CT_ERROR(ct, "Corrupted descriptor head=%u tail=%u status=%#x\n", - desc->head, desc->tail, desc->status); + if (I915_SELFTEST_ONLY(ct_to_guc(ct)->bad_desc_expected)) + CT_DEBUG(ct, "Corrupted descriptor head=%u tail=%u status=%#x\n", + desc->head, desc->tail, desc->status); + else + CT_ERROR(ct, "Corrupted descriptor head=%u tail=%u status=%#x\n", + desc->head, desc->tail, desc->status); ctb->broken = true; return -EPIPE; } @@ -539,9 +552,18 @@ static inline bool ct_deadlocked(struct intel_guc_ct *ct) struct guc_ct_buffer_desc *send = ct->ctbs.send.desc; struct guc_ct_buffer_desc *recv = ct->ctbs.send.desc; - CT_ERROR(ct, "Communication stalled for %lld ms, desc status=%#x,%#x\n", - ktime_ms_delta(ktime_get(), ct->stall_time), - send->status, recv->status); + /* + * CI doesn't like error messages, demote to debug if deadlock was + * intentionally hit. + */ + if (I915_SELFTEST_ONLY(ct_to_guc(ct)->deadlock_expected)) + CT_DEBUG(ct, "Communication stalled for %lld ms, desc status=%#x,%#x\n", + ktime_ms_delta(ktime_get(), ct->stall_time), + send->status, recv->status); + else + CT_ERROR(ct, "Communication stalled for %lld ms, desc status=%#x,%#x\n", + ktime_ms_delta(ktime_get(), ct->stall_time), + send->status, recv->status); ct->ctbs.send.broken = true; } @@ -764,8 +786,9 @@ int intel_guc_ct_send(struct intel_guc_ct *ct, const u32 *action, u32 len, return -ENODEV; } - if (unlikely(ct->ctbs.send.broken)) - return -EPIPE; + if (!I915_SELFTEST_ONLY(ct_to_guc(ct)->deadlock_expected)) + if (unlikely(ct->ctbs.send.broken)) + return -EPIPE; if (flags & INTEL_GUC_CT_SEND_NB) return ct_send_nb(ct, action, len, flags); diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h index 7b34026d264a..bafc511771f1 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h @@ -6,6 +6,7 @@ #ifndef _INTEL_GUC_CT_H_ #define _INTEL_GUC_CT_H_ +#include #include #include #include @@ -115,4 +116,12 @@ void intel_guc_ct_event_handler(struct intel_guc_ct *ct); void intel_guc_ct_print_info(struct intel_guc_ct *ct, struct drm_printer *p); +static inline bool intel_guc_ct_is_recv_buffer_empty(struct intel_guc_ct *ct) +{ + struct intel_guc_ct_buffer *ctb = &ct->ctbs.recv; + + return atomic_read(&ctb->space) == + (CIRC_SPACE(0, 0, ctb->size) - ctb->resv_space); +} + #endif /* _INTEL_GUC_CT_H_ */ diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index 8c3332e24d59..65f59497e7ef 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -818,6 +818,7 @@ static int gse_dequeue_one_context(struct guc_submit_engine *gse) GEM_WARN_ON(ret); /* Unexpected */ goto deadlk; } + I915_SELFTEST_DECLARE(++gse->tasklets_submit_count;) } /* @@ -2092,7 +2093,15 @@ static void __guc_context_sched_disable(struct intel_guc *guc, GUC_CONTEXT_DISABLE }; +#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST) + if (guc->inject_bad_sched_disable && + guc_id == GUC_INVALID_LRC_ID) + guc->inject_bad_sched_disable = false; + else + GEM_BUG_ON(guc_id == GUC_INVALID_LRC_ID); +#else GEM_BUG_ON(guc_id == GUC_INVALID_LRC_ID); +#endif trace_intel_context_sched_disable(ce); @@ -2739,6 +2748,9 @@ static void retire_worker_sched_disable(struct guc_submit_engine *gse, guc_id = 
prep_context_pending_disable(ce); spin_unlock_irqrestore(&ce->guc_state.lock, flags); + if (I915_SELFTEST_ONLY(guc->inject_bad_sched_disable)) + guc_id = GUC_INVALID_LRC_ID; + with_intel_runtime_pm(runtime_pm, wakeref) __guc_context_sched_disable(guc, ce, guc_id); @@ -3991,3 +4003,7 @@ bool intel_guc_virtual_engine_has_heartbeat(const struct intel_engine_cs *ve) return false; } + +#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST) +#include "selftest_guc_flow_control.c" +#endif diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission_types.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission_types.h index 0c224ab18c02..7069b7248f55 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission_types.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission_types.h @@ -47,6 +47,8 @@ struct guc_submit_engine { STALL_MOVE_LRC_TAIL, STALL_ADD_REQUEST, } submission_stall_reason; + + I915_SELFTEST_DECLARE(u64 tasklets_submit_count;) }; #endif diff --git a/drivers/gpu/drm/i915/gt/uc/selftest_guc_flow_control.c b/drivers/gpu/drm/i915/gt/uc/selftest_guc_flow_control.c new file mode 100644 index 000000000000..f31ab2674b2b --- /dev/null +++ b/drivers/gpu/drm/i915/gt/uc/selftest_guc_flow_control.c @@ -0,0 +1,581 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright �� 2021 Intel Corporation + */ + +#include "selftests/igt_spinner.h" +#include "selftests/igt_reset.h" +#include "selftests/intel_scheduler_helpers.h" +#include "gt/intel_engine_heartbeat.h" +#include "gem/selftests/mock_context.h" + +static int __request_add_spin(struct i915_request *rq, struct igt_spinner *spin) +{ + int err = 0; + + i915_request_get(rq); + i915_request_add(rq); + if (spin && !igt_wait_for_spinner(spin, rq)) + err = -ETIMEDOUT; + + return err; +} + +static struct i915_request *nop_kernel_request(struct intel_engine_cs *engine) +{ + struct i915_request *rq; + + rq = intel_engine_create_kernel_request(engine); + if (IS_ERR(rq)) + return rq; + + i915_request_get(rq); + i915_request_add(rq); + + return rq; +} + +static struct i915_request *nop_user_request(struct intel_context *ce, + struct i915_request *from) +{ + struct i915_request *rq; + int ret; + + rq = intel_context_create_request(ce); + if (IS_ERR(rq)) + return rq; + + if (from) { + ret = i915_sw_fence_await_dma_fence(&rq->submit, + &from->fence, 0, + I915_FENCE_GFP); + if (ret < 0) { + i915_request_put(rq); + return ERR_PTR(ret); + } + } + + i915_request_get(rq); + i915_request_add(rq); + + return rq; +} + +static int nop_request_wait(struct intel_engine_cs *engine, bool kernel, + bool flow_control) +{ + struct i915_gpu_error *global = &engine->gt->i915->gpu_error; + unsigned int reset_count = i915_reset_count(global); + struct intel_guc *guc = &engine->gt->uc.guc; + struct guc_submit_engine *gse = guc->gse[GUC_SUBMIT_ENGINE_SINGLE_LRC]; + u64 tasklets_submit_count = gse->tasklets_submit_count; + struct intel_context *ce; + struct i915_request *nop; + int ret; + + if (kernel) { + nop = nop_kernel_request(engine); + } else { + ce = intel_context_create(engine); + if (IS_ERR(ce)) + return PTR_ERR(ce); + nop = nop_user_request(ce, NULL); + intel_context_put(ce); + } + if (IS_ERR(nop)) + return PTR_ERR(nop); + + ret = intel_selftest_wait_for_rq(nop); + i915_request_put(nop); + if (ret) + return ret; + + if (!flow_control && + gse->tasklets_submit_count != tasklets_submit_count) { + pr_err("Flow control for single-lrc unexpectedly kicked in\n"); + ret = -EINVAL; + } + + if (flow_control && + gse->tasklets_submit_count == tasklets_submit_count) { + pr_err("Flow control for single-lrc 
did not kick in\n"); + ret = -EINVAL; + } + + if (i915_reset_count(global) != reset_count) { + pr_err("Unexpected GPU reset during single-lrc submit\n"); + ret = -EINVAL; + } + + return ret; +} + +#define NUM_GUC_ID 256 +#define NUM_CONTEXT 1024 +#define NUM_RQ_PER_CONTEXT 2 +#define HEARTBEAT_INTERVAL 1500 + +static int __intel_guc_flow_control_guc(void *arg, bool limit_guc_ids, bool hang) +{ + struct intel_gt *gt = arg; + struct intel_guc *guc = >->uc.guc; + struct guc_submit_engine *gse = guc->gse[GUC_SUBMIT_ENGINE_SINGLE_LRC]; + struct intel_context **contexts; + int ret = 0; + int i, j, k; + struct intel_context *ce; + struct igt_spinner spin; + struct i915_request *spin_rq = NULL, *last = NULL; + intel_wakeref_t wakeref; + struct intel_engine_cs *engine; + struct i915_gpu_error *global = >->i915->gpu_error; + unsigned int reset_count; + u64 tasklets_submit_count = gse->tasklets_submit_count; + u32 old_beat; + + contexts = kmalloc(sizeof(*contexts) * NUM_CONTEXT, GFP_KERNEL); + if (!contexts) { + pr_err("Context array allocation failed\n"); + return -ENOMEM; + } + + wakeref = intel_runtime_pm_get(gt->uncore->rpm); + + if (limit_guc_ids) + guc->num_guc_ids = NUM_GUC_ID; + + ce = intel_context_create(intel_selftest_find_any_engine(gt)); + if (IS_ERR(ce)) { + ret = PTR_ERR(ce); + pr_err("Failed to create context: %d\n", ret); + goto err; + } + + reset_count = i915_reset_count(global); + engine = ce->engine; + + old_beat = engine->props.heartbeat_interval_ms; + if (hang) { + ret = intel_engine_set_heartbeat(engine, HEARTBEAT_INTERVAL); + if (ret) { + pr_err("Failed to boost heartbeat interval: %d\n", ret); + goto err; + } + } + + /* Create spinner to block requests in below loop */ + ret = igt_spinner_init(&spin, engine->gt); + if (ret) { + pr_err("Failed to create spinner: %d\n", ret); + goto err_heartbeat; + } + spin_rq = igt_spinner_create_request(&spin, ce, MI_ARB_CHECK); + intel_context_put(ce); + if (IS_ERR(spin_rq)) { + ret = PTR_ERR(spin_rq); + pr_err("Failed to create spinner request: %d\n", ret); + goto err_heartbeat; + } + ret = __request_add_spin(spin_rq, &spin); + if (ret) { + pr_err("Failed to add Spinner request: %d\n", ret); + goto err_spin_rq; + } + + /* + * Create of lot of requests in a loop to trigger the flow control state + * machine. Using a three level loop as it is interesting to hit flow + * control with more than 1 request on each context in a row and also + * interleave requests with other contexts. 
+ */ + for (i = 0; i < NUM_RQ_PER_CONTEXT; ++i) { + for (j = 0; j < NUM_CONTEXT; ++j) { + for (k = 0; k < NUM_RQ_PER_CONTEXT; ++k) { + bool first_pass = !i && !k; + + if (last) + i915_request_put(last); + last = NULL; + + if (first_pass) + contexts[j] = intel_context_create(engine); + ce = contexts[j]; + + if (IS_ERR(ce)) { + ret = PTR_ERR(ce); + pr_err("Failed to create context, %d,%d,%d: %d\n", + i, j, k, ret); + goto err_spin_rq; + } + + last = nop_user_request(ce, spin_rq); + if (first_pass) + intel_context_put(ce); + if (IS_ERR(last)) { + ret = PTR_ERR(last); + pr_err("Failed to create request, %d,%d,%d: %d\n", + i, j, k, ret); + goto err_spin_rq; + } + } + } + } + + /* Verify GuC submit engine state */ + if (limit_guc_ids && !guc_ids_exhausted(gse)) { + pr_err("guc_ids not exhausted\n"); + ret = -EINVAL; + goto err_spin_rq; + } + if (!limit_guc_ids && guc_ids_exhausted(gse)) { + pr_err("guc_ids exhausted\n"); + ret = -EINVAL; + goto err_spin_rq; + } + + /* Ensure no DoS from unready requests */ + ret = nop_request_wait(engine, false, true); + if (ret < 0) { + pr_err("User NOP request DoS: %d\n", ret); + goto err_spin_rq; + } + + /* Inject hang in flow control state machine */ + if (hang) { + guc->gse_hang_expected = true; + guc->inject_bad_sched_disable = true; + } + + /* Release blocked requests */ + igt_spinner_end(&spin); + ret = intel_selftest_wait_for_rq(spin_rq); + if (ret) { + pr_err("Spin request failed to complete: %d\n", ret); + goto err_spin_rq; + } + i915_request_put(spin_rq); + igt_spinner_fini(&spin); + spin_rq = NULL; + + /* Wait for last request / GT to idle */ + ret = i915_request_wait(last, 0, hang ? HZ * 30 : HZ * 10); + if (ret < 0) { + pr_err("Last request failed to complete: %d\n", ret); + goto err_spin_rq; + } + i915_request_put(last); + last = NULL; + ret = intel_gt_wait_for_idle(gt, HZ * 5); + if (ret < 0) { + pr_err("GT failed to idle: %d\n", ret); + goto err_spin_rq; + } + + /* Check state after idle */ + if (guc_ids_exhausted(gse)) { + pr_err("guc_ids exhausted after last request signaled\n"); + ret = -EINVAL; + goto err_spin_rq; + } + if (hang) { + if (i915_reset_count(global) == reset_count) { + pr_err("Failed to record a GPU reset\n"); + ret = -EINVAL; + goto err_spin_rq; + } + } else { + if (i915_reset_count(global) != reset_count) { + pr_err("Unexpected GPU reset\n"); + ret = -EINVAL; + goto err_spin_rq; + } + if (gse->tasklets_submit_count == tasklets_submit_count) { + pr_err("Flow control failed to kick in\n"); + ret = -EINVAL; + goto err_spin_rq; + } + } + + /* Verify requests can be submitted after flow control */ + ret = nop_request_wait(engine, true, false); + if (ret < 0) { + pr_err("Kernel NOP failed to complete: %d\n", ret); + goto err_spin_rq; + } + ret = nop_request_wait(engine, false, false); + if (ret < 0) { + pr_err("User NOP failed to complete: %d\n", ret); + goto err_spin_rq; + } + +err_spin_rq: + if (spin_rq) { + igt_spinner_end(&spin); + intel_selftest_wait_for_rq(spin_rq); + i915_request_put(spin_rq); + igt_spinner_fini(&spin); + intel_gt_wait_for_idle(gt, HZ * 5); + } +err_heartbeat: + if (last) + i915_request_put(last); + intel_engine_set_heartbeat(engine, old_beat); +err: + intel_runtime_pm_put(gt->uncore->rpm, wakeref); + guc->num_guc_ids = guc->max_guc_ids; + guc->gse_hang_expected = false; + guc->inject_bad_sched_disable = false; + kfree(contexts); + + return ret; +} + +static int intel_guc_flow_control_guc_ids(void *arg) +{ + return __intel_guc_flow_control_guc(arg, true, false); +} + +static int 
intel_guc_flow_control_lrcd_reg(void *arg) +{ + return __intel_guc_flow_control_guc(arg, false, false); +} + +static int intel_guc_flow_control_hang_state_machine(void *arg) +{ + return __intel_guc_flow_control_guc(arg, true, true); +} + +#define NUM_RQ_STRESS_CTBS 0x4000 +static int intel_guc_flow_control_stress_ctbs(void *arg) +{ + struct intel_gt *gt = arg; + int ret = 0; + int i; + struct intel_context *ce; + struct i915_request *last = NULL, *rq; + intel_wakeref_t wakeref; + struct intel_engine_cs *engine; + struct i915_gpu_error *global = >->i915->gpu_error; + unsigned int reset_count; + struct intel_guc *guc = >->uc.guc; + struct intel_guc_ct_buffer *ctb = &guc->ct.ctbs.recv; + + wakeref = intel_runtime_pm_get(gt->uncore->rpm); + + reset_count = i915_reset_count(global); + engine = intel_selftest_find_any_engine(gt); + + /* + * Create a bunch of requests, and then idle the GT which will create a + * lot of H2G / G2H traffic. + */ + for (i = 0; i < NUM_RQ_STRESS_CTBS; ++i) { + ce = intel_context_create(engine); + if (IS_ERR(ce)) { + ret = PTR_ERR(ce); + pr_err("Failed to create context, %d: %d\n", i, ret); + goto err; + } + + rq = nop_user_request(ce, NULL); + intel_context_put(ce); + + if (IS_ERR(rq)) { + ret = PTR_ERR(rq); + pr_err("Failed to create request, %d: %d\n", i, ret); + goto err; + } + + if (last) + i915_request_put(last); + last = rq; + } + + ret = i915_request_wait(last, 0, HZ * 10); + if (ret < 0) { + pr_err("Last request failed to complete: %d\n", ret); + goto err; + } + i915_request_put(last); + last = NULL; + + ret = intel_gt_wait_for_idle(gt, HZ * 10); + if (ret < 0) { + pr_err("GT failed to idle: %d\n", ret); + goto err; + } + + if (i915_reset_count(global) != reset_count) { + pr_err("Unexpected GPU reset\n"); + ret = -EINVAL; + goto err; + } + + ret = nop_request_wait(engine, true, false); + if (ret < 0) { + pr_err("Kernel NOP failed to complete: %d\n", ret); + goto err; + } + + ret = nop_request_wait(engine, false, false); + if (ret < 0) { + pr_err("User NOP failed to complete: %d\n", ret); + goto err; + } + + ret = intel_gt_wait_for_idle(gt, HZ); + if (ret < 0) { + pr_err("GT failed to idle: %d\n", ret); + goto err; + } + + ret = wait_for(intel_guc_ct_is_recv_buffer_empty(&guc->ct), HZ); + if (ret) { + pr_err("Recv CTB not expected value=%d,%d outstanding_ctb=%d\n", + atomic_read(&ctb->space), + CIRC_SPACE(0, 0, ctb->size) - ctb->resv_space, + atomic_read(&guc->outstanding_submission_g2h)); + ret = -EINVAL; + goto err; + } + +err: + if (last) + i915_request_put(last); + intel_runtime_pm_put(gt->uncore->rpm, wakeref); + + return ret; +} + +#define NUM_RQ_DEADLOCK 2048 +static int __intel_guc_flow_control_deadlock_h2g(void *arg, bool bad_desc) +{ + struct intel_gt *gt = arg; + struct intel_guc *guc = >->uc.guc; + int ret = 0; + int i; + struct intel_context *ce; + struct i915_request *last = NULL, *rq; + intel_wakeref_t wakeref; + struct intel_engine_cs *engine; + struct i915_gpu_error *global = >->i915->gpu_error; + unsigned int reset_count; + u32 old_beat; + + wakeref = intel_runtime_pm_get(gt->uncore->rpm); + + reset_count = i915_reset_count(global); + engine = intel_selftest_find_any_engine(gt); + + old_beat = engine->props.heartbeat_interval_ms; + ret = intel_engine_set_heartbeat(engine, HEARTBEAT_INTERVAL); + if (ret) { + pr_err("Failed to boost heartbeat interval: %d\n", ret); + goto err; + } + + guc->inject_corrupt_h2g = true; + if (bad_desc) + guc->bad_desc_expected = true; + else + guc->deadlock_expected = true; + + for (i = 0; i < NUM_RQ_DEADLOCK; ++i) 
{ + ce = intel_context_create(engine); + if (IS_ERR(ce)) { + ret = PTR_ERR(ce); + pr_err("Failed to create context, %d: %d\n", i, ret); + goto err_heartbeat; + } + + rq = nop_user_request(ce, NULL); + intel_context_put(ce); + + if (IS_ERR(rq)) { + ret = PTR_ERR(rq); + pr_err("Failed to create request, %d: %d\n", i, ret); + goto err_heartbeat; + } + + if (last) + i915_request_put(last); + last = rq; + } + + pr_debug("Number requests before deadlock: %d\n", i); + + ret = i915_request_wait(last, 0, HZ * 5); + if (ret < 0) { + pr_err("Last request failed to complete: %d\n", ret); + goto err_heartbeat; + } + i915_request_put(last); + last = NULL; + + ret = intel_gt_wait_for_idle(gt, HZ * 10); + if (ret < 0) { + pr_err("GT failed to idle: %d\n", ret); + goto err_heartbeat; + } + + if (i915_reset_count(global) == reset_count) { + pr_err("Failed to record a GPU reset\n"); + ret = -EINVAL; + goto err_heartbeat; + } + + ret = nop_request_wait(engine, true, false); + if (ret < 0) { + pr_err("Kernel NOP failed to complete: %d\n", ret); + goto err_heartbeat; + } + + ret = nop_request_wait(engine, false, false); + if (ret < 0) { + pr_err("User NOP failed to complete: %d\n", ret); + goto err_heartbeat; + } + +err_heartbeat: + if (last) + i915_request_put(last); + intel_engine_set_heartbeat(engine, old_beat); +err: + intel_runtime_pm_put(gt->uncore->rpm, wakeref); + guc->inject_corrupt_h2g = false; + guc->deadlock_expected = false; + guc->bad_desc_expected = false; + + return ret; +} + +static int intel_guc_flow_control_deadlock_h2g(void *arg) +{ + return __intel_guc_flow_control_deadlock_h2g(arg, false); +} + +static int intel_guc_flow_control_bad_desc_h2g(void *arg) +{ + return __intel_guc_flow_control_deadlock_h2g(arg, true); +} + +int intel_guc_flow_control(struct drm_i915_private *i915) +{ + static const struct i915_subtest tests[] = { + SUBTEST(intel_guc_flow_control_stress_ctbs), + SUBTEST(intel_guc_flow_control_guc_ids), + SUBTEST(intel_guc_flow_control_lrcd_reg), + SUBTEST(intel_guc_flow_control_hang_state_machine), + SUBTEST(intel_guc_flow_control_deadlock_h2g), + SUBTEST(intel_guc_flow_control_bad_desc_h2g), + }; + struct intel_gt *gt = &i915->gt; + + if (intel_gt_is_wedged(gt)) + return 0; + + if (!intel_uc_uses_guc_submission(>->uc)) + return 0; + + return intel_gt_live_subtests(tests, gt); +} diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h index e2fd1b61af71..d9bd732b741a 100644 --- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h +++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h @@ -47,5 +47,6 @@ selftest(hangcheck, intel_hangcheck_live_selftests) selftest(execlists, intel_execlists_live_selftests) selftest(ring_submission, intel_ring_submission_live_selftests) selftest(perf, i915_perf_live_selftests) +selftest(guc_flow_control, intel_guc_flow_control) /* Here be dragons: keep last to run last! 
*/ selftest(late_gt_pm, intel_gt_pm_late_selftests) diff --git a/drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.c b/drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.c index ebd6d69b3315..f83c8c6c0d9b 100644 --- a/drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.c +++ b/drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.c @@ -15,6 +15,18 @@ #define REDUCED_PREEMPT 10 #define WAIT_FOR_RESET_TIME 10000 +struct intel_engine_cs *intel_selftest_find_any_engine(struct intel_gt *gt) +{ + struct intel_engine_cs *engine; + enum intel_engine_id id; + + for_each_engine(engine, gt, id) + return engine; + + pr_err("No valid engine found!\n"); + return NULL; +} + int intel_selftest_modify_policy(struct intel_engine_cs *engine, struct intel_selftest_saved_policy *saved, u32 modify_type) diff --git a/drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.h b/drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.h index 050bc5a8ba8b..01ba01603837 100644 --- a/drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.h +++ b/drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.h @@ -31,5 +31,6 @@ int intel_selftest_modify_policy(struct intel_engine_cs *engine, int intel_selftest_restore_policy(struct intel_engine_cs *engine, struct intel_selftest_saved_policy *saved); int intel_selftest_wait_for_rq( struct i915_request *rq); +struct intel_engine_cs *intel_selftest_find_any_engine(struct intel_gt *gt); #endif From patchwork Tue Jul 20 20:57:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389343 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 283CAC636C8 for ; Tue, 20 Jul 2021 20:41:00 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id E60D260FEA for ; Tue, 20 Jul 2021 20:40:59 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org E60D260FEA Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 8A5E36E529; Tue, 20 Jul 2021 20:40:27 +0000 (UTC) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by gabe.freedesktop.org (Postfix) with ESMTPS id A05AA6E3DA; Tue, 20 Jul 2021 20:40:17 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="296885369" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="296885369" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906072" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:15 
-0700 From: Matthew Brost To: , Subject: [RFC PATCH 14/42] drm/i915: Add logical engine mapping Date: Tue, 20 Jul 2021 13:57:34 -0700 Message-Id: <20210720205802.39610-15-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Add logical engine mapping. This is required for split-frame, as workloads need to be placed on engines in a logically contiguous manner. Signed-off-by: Matthew Brost --- drivers/gpu/drm/i915/gt/intel_engine_cs.c | 60 ++++++++++++++++--- drivers/gpu/drm/i915/gt/intel_engine_types.h | 1 + .../drm/i915/gt/intel_execlists_submission.c | 1 + drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c | 2 +- .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 21 +------ 5 files changed, 56 insertions(+), 29 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c index 51a0d860d551..1ce112fefcf5 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c +++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c @@ -260,7 +260,8 @@ static void nop_irq_handler(struct intel_engine_cs *engine, u16 iir) GEM_DEBUG_WARN_ON(iir); } -static int intel_engine_setup(struct intel_gt *gt, enum intel_engine_id id) +static int intel_engine_setup(struct intel_gt *gt, enum intel_engine_id id, + u8 logical_instance) { const struct engine_info *info = &intel_engines[id]; struct drm_i915_private *i915 = gt->i915; @@ -303,6 +304,7 @@ static int intel_engine_setup(struct intel_gt *gt, enum intel_engine_id id) engine->class = info->class; engine->instance = info->instance; + engine->logical_mask = BIT(logical_instance); __sprint_engine_name(engine); engine->props.heartbeat_interval_ms = @@ -516,6 +518,37 @@ static intel_engine_mask_t init_engine_mask(struct intel_gt *gt) return info->engine_mask; } +static void populate_logical_ids(struct intel_gt *gt, u8 *logical_ids, + u8 class, const u8 *map, u8 num_instances) +{ + int i, j; + u8 current_logical_id = 0; + + for (j = 0; j < num_instances; ++j) { + for (i = 0; i < ARRAY_SIZE(intel_engines); ++i) { + if (!HAS_ENGINE(gt, i) || + intel_engines[i].class != class) + continue; + + if (intel_engines[i].instance == map[j]) { + logical_ids[intel_engines[i].instance] = + current_logical_id++; + break; + } + } + } +} + +static void setup_logical_ids(struct intel_gt *gt, u8 *logical_ids, u8 class) +{ + int i; + u8 map[MAX_ENGINE_INSTANCE + 1]; + + for (i = 0; i < MAX_ENGINE_INSTANCE + 1; ++i) + map[i] = i; + populate_logical_ids(gt, logical_ids, class, map, ARRAY_SIZE(map)); +} + /** * intel_engines_init_mmio() - allocate and prepare the Engine Command Streamers * @gt: pointer to struct intel_gt @@ -527,7 +560,8 @@ int intel_engines_init_mmio(struct intel_gt *gt) struct drm_i915_private *i915 = gt->i915; const unsigned int engine_mask = init_engine_mask(gt); unsigned int mask = 0; - unsigned int i; + unsigned int i, class; + u8 logical_ids[MAX_ENGINE_INSTANCE + 1]; int err; drm_WARN_ON(&i915->drm, engine_mask == 0); @@ -537,15 +571,23 @@ int intel_engines_init_mmio(struct intel_gt *gt) if (i915_inject_probe_failure(i915)) return -ENODEV; - for (i = 0; i < ARRAY_SIZE(intel_engines); i++) { - if (!HAS_ENGINE(gt, i)) - 
continue; + for (class = 0; class < MAX_ENGINE_CLASS + 1; ++class) { + setup_logical_ids(gt, logical_ids, class); - err = intel_engine_setup(gt, i); - if (err) - goto cleanup; + for (i = 0; i < ARRAY_SIZE(intel_engines); ++i) { + u8 instance = intel_engines[i].instance; + + if (intel_engines[i].class != class || + !HAS_ENGINE(gt, i)) + continue; - mask |= BIT(i); + err = intel_engine_setup(gt, i, + logical_ids[instance]); + if (err) + goto cleanup; + + mask |= BIT(i); + } } /* diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h index eec57e57403f..1a458d527988 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine_types.h +++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h @@ -272,6 +272,7 @@ struct intel_engine_cs { unsigned int guc_id; intel_engine_mask_t mask; + intel_engine_mask_t logical_mask; u8 class; u8 instance; diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c index 2aa5cc100956..2b9a2a61495f 100644 --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c @@ -3823,6 +3823,7 @@ execlists_create_virtual(struct intel_engine_cs **siblings, unsigned int count) ve->siblings[ve->num_siblings++] = sibling; ve->base.mask |= sibling->mask; + ve->base.logical_mask |= sibling->logical_mask; /* * All physical engines must be compatible for their emission diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c index c56302dedb32..676c394734ce 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ads.c @@ -176,7 +176,7 @@ static void guc_mapping_table_init(struct intel_gt *gt, for_each_engine(engine, gt, id) { u8 guc_class = engine_class_to_guc_class(engine->class); - system_info->mapping_table[guc_class][engine->instance] = + system_info->mapping_table[guc_class][ilog2(engine->logical_mask)] = engine->instance; } } diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index 65f59497e7ef..b159297b9334 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -1781,23 +1781,6 @@ static int deregister_context(struct intel_context *ce, u32 guc_id, bool loop) return __guc_action_deregister_context(guc, guc_id, loop); } -static intel_engine_mask_t adjust_engine_mask(u8 class, intel_engine_mask_t mask) -{ - switch (class) { - case RENDER_CLASS: - return mask >> RCS0; - case VIDEO_ENHANCEMENT_CLASS: - return mask >> VECS0; - case VIDEO_DECODE_CLASS: - return mask >> VCS0; - case COPY_ENGINE_CLASS: - return mask >> BCS0; - default: - MISSING_CASE(class); - return 0; - } -} - static void guc_context_policy_init(struct intel_engine_cs *engine, struct guc_lrc_desc *desc) { @@ -1938,8 +1921,7 @@ static int guc_lrc_desc_pin(struct intel_context *ce, bool loop) desc = __get_lrc_desc(guc, ce->guc_lrcd_reg_idx); desc->engine_class = engine_class_to_guc_class(engine->class); - desc->engine_submit_mask = adjust_engine_mask(engine->class, - engine->mask); + desc->engine_submit_mask = engine->logical_mask; desc->hw_context_desc = ce->lrc.lrca; ce->guc_prio = map_i915_prio_to_guc_prio(prio); desc->priority = ce->guc_prio; @@ -3946,6 +3928,7 @@ guc_create_virtual(struct intel_engine_cs **siblings, unsigned int count) } ve->base.mask |= sibling->mask; + ve->base.logical_mask |= sibling->logical_mask; if (n != 0 && ve->base.class 
!= sibling->class) { DRM_DEBUG("invalid mixing of engine class, sibling %d, already %d\n", From patchwork Tue Jul 20 20:57:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389333 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id BD35CC07E9B for ; Tue, 20 Jul 2021 20:40:45 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 906D560FEA for ; Tue, 20 Jul 2021 20:40:45 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 906D560FEA Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 58C1F6E550; Tue, 20 Jul 2021 20:40:22 +0000 (UTC) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by gabe.freedesktop.org (Postfix) with ESMTPS id 314AF6E4E3; Tue, 20 Jul 2021 20:40:17 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="209421884" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="209421884" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906073" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:15 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 15/42] drm/i915: Expose logical engine instance to user Date: Tue, 20 Jul 2021 13:57:35 -0700 Message-Id: <20210720205802.39610-16-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Expose the logical engine instance to userspace via the query engine info IOCTL. This is required for split-frame workloads as these need to be placed on engines in a logically contiguous order. The logical mapping can change based on fusing. Rather than requiring userspace to have knowledge of the fusing, we simply expose the logical mapping through the existing query engine info IOCTL.
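
For illustration only (the helper below is not part of this patch and its name is made up), userspace could consume the new field with the standard two-pass DRM_IOCTL_I915_QUERY flow, assuming the uapi additions in the diff below land as-is:

  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/ioctl.h>
  #include <drm/i915_drm.h>

  /* Hypothetical helper: dump the logical instance of every engine. */
  static void dump_logical_instances(int fd)
  {
          struct drm_i915_query_item item = {
                  .query_id = DRM_I915_QUERY_ENGINE_INFO,
          };
          struct drm_i915_query query = {
                  .num_items = 1,
                  .items_ptr = (uintptr_t)&item,
          };
          struct drm_i915_query_engine_info *info;
          unsigned int i;

          /* First pass: the kernel reports the required buffer size. */
          if (ioctl(fd, DRM_IOCTL_I915_QUERY, &query) || item.length <= 0)
                  return;

          info = calloc(1, item.length);
          if (!info)
                  return;
          item.data_ptr = (uintptr_t)info;

          /* Second pass: the kernel fills in the engine info. */
          if (!ioctl(fd, DRM_IOCTL_I915_QUERY, &query)) {
                  for (i = 0; i < info->num_engines; i++) {
                          struct drm_i915_engine_info *e = &info->engines[i];

                          if (e->flags & I915_ENGINE_INFO_HAS_LOGICAL_INSTANCE)
                                  printf("class %u instance %u -> logical %u\n",
                                         e->engine.engine_class,
                                         e->engine.engine_instance,
                                         e->logical_instance);
                  }
          }
          free(info);
  }
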
Cc: Tvrtko Ursulin Signed-off-by: Matthew Brost --- drivers/gpu/drm/i915/i915_query.c | 2 ++ include/uapi/drm/i915_drm.h | 8 +++++++- 2 files changed, 9 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/i915/i915_query.c b/drivers/gpu/drm/i915/i915_query.c index e49da36c62fb..8a72923fbdba 100644 --- a/drivers/gpu/drm/i915/i915_query.c +++ b/drivers/gpu/drm/i915/i915_query.c @@ -124,7 +124,9 @@ query_engine_info(struct drm_i915_private *i915, for_each_uabi_engine(engine, i915) { info.engine.engine_class = engine->uabi_class; info.engine.engine_instance = engine->uabi_instance; + info.flags = I915_ENGINE_INFO_HAS_LOGICAL_INSTANCE; info.capabilities = engine->uabi_capabilities; + info.logical_instance = ilog2(engine->logical_mask); if (copy_to_user(info_ptr, &info, sizeof(info))) return -EFAULT; diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h index 44919d0848a0..b617f98598f8 100644 --- a/include/uapi/drm/i915_drm.h +++ b/include/uapi/drm/i915_drm.h @@ -2662,14 +2662,20 @@ struct drm_i915_engine_info { /** @flags: Engine flags. */ __u64 flags; +#define I915_ENGINE_INFO_HAS_LOGICAL_INSTANCE (1 << 0) /** @capabilities: Capabilities of this engine. */ __u64 capabilities; #define I915_VIDEO_CLASS_CAPABILITY_HEVC (1 << 0) #define I915_VIDEO_AND_ENHANCE_CLASS_CAPABILITY_SFC (1 << 1) + /** @logical_instance: Logical instance of engine */ + __u16 logical_instance; + /** @rsvd1: Reserved fields. */ - __u64 rsvd1[4]; + __u16 rsvd1[3]; + /** @rsvd2: Reserved fields. */ + __u64 rsvd2[2]; }; /** From patchwork Tue Jul 20 20:57:36 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389349 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 79405C07E9B for ; Tue, 20 Jul 2021 20:41:05 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 50EEC60FEA for ; Tue, 20 Jul 2021 20:41:05 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 50EEC60FEA Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id C08EF6E52C; Tue, 20 Jul 2021 20:40:27 +0000 (UTC) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by gabe.freedesktop.org (Postfix) with ESMTPS id B665E6E513; Tue, 20 Jul 2021 20:40:17 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="296885370" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="296885370" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906074" Received: from dhiatt-server.jf.intel.com 
([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:15 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 16/42] drm/i915/guc: Introduce context parent-child relationship Date: Tue, 20 Jul 2021 13:57:36 -0700 Message-Id: <20210720205802.39610-17-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Introduce context parent-child relationship. Once this relationship is created all pinning / unpinning operations are directed to the parent context. The parent context is responsible for pinning all of its' children and itself. This is a precursor to the full GuC multi-lrc implementation but aligns to how GuC mutli-lrc interface is defined - a single H2G is used register / deregister all of the contexts simultaneously. Subsequent patches in the series will implement the pinning / unpinning operations for parent / child contexts. Signed-off-by: Matthew Brost --- drivers/gpu/drm/i915/gt/intel_context.c | 29 +++++++++++++++++++ drivers/gpu/drm/i915/gt/intel_context.h | 18 ++++++++++++ drivers/gpu/drm/i915/gt/intel_context_types.h | 12 ++++++++ 3 files changed, 59 insertions(+) diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c index b1e3d00fb1f2..76dd038c57d2 100644 --- a/drivers/gpu/drm/i915/gt/intel_context.c +++ b/drivers/gpu/drm/i915/gt/intel_context.c @@ -399,6 +399,8 @@ intel_context_init(struct intel_context *ce, struct intel_engine_cs *engine) spin_lock_init(&ce->guc_state.lock); INIT_LIST_HEAD(&ce->guc_state.fences); + INIT_LIST_HEAD(&ce->guc_child_list); + spin_lock_init(&ce->guc_active.lock); INIT_LIST_HEAD(&ce->guc_active.requests); @@ -414,10 +416,17 @@ intel_context_init(struct intel_context *ce, struct intel_engine_cs *engine) void intel_context_fini(struct intel_context *ce) { + struct intel_context *child, *next; + if (ce->timeline) intel_timeline_put(ce->timeline); i915_vm_put(ce->vm); + /* Need to put the creation ref for the children */ + if (intel_context_is_parent(ce)) + for_each_child_safe(ce, child, next) + intel_context_put(child); + mutex_destroy(&ce->pin_mutex); i915_active_fini(&ce->active); } @@ -544,6 +553,26 @@ struct i915_request *intel_context_find_active_request(struct intel_context *ce) return active; } +void intel_context_bind_parent_child(struct intel_context *parent, + struct intel_context *child) +{ + /* + * Callers responsibility to validate that this function is used + * correctly but we use GEM_BUG_ON here ensure that they do. 
+ */ + GEM_BUG_ON(!intel_engine_uses_guc(parent->engine)); + GEM_BUG_ON(intel_context_is_pinned(parent)); + GEM_BUG_ON(intel_context_is_child(parent)); + GEM_BUG_ON(intel_context_is_pinned(child)); + GEM_BUG_ON(intel_context_is_child(child)); + GEM_BUG_ON(intel_context_is_parent(child)); + + parent->guc_number_children++; + list_add_tail(&child->guc_child_link, + &parent->guc_child_list); + child->parent = parent; +} + #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST) #include "selftest_context.c" #endif diff --git a/drivers/gpu/drm/i915/gt/intel_context.h b/drivers/gpu/drm/i915/gt/intel_context.h index 876bdb08303c..28378fd9eb99 100644 --- a/drivers/gpu/drm/i915/gt/intel_context.h +++ b/drivers/gpu/drm/i915/gt/intel_context.h @@ -41,6 +41,24 @@ void intel_context_free(struct intel_context *ce); int intel_context_reconfigure_sseu(struct intel_context *ce, const struct intel_sseu sseu); +static inline bool intel_context_is_child(struct intel_context *ce) +{ + return !!ce->parent; +} + +static inline bool intel_context_is_parent(struct intel_context *ce) +{ + return !!ce->guc_number_children; +} + +void intel_context_bind_parent_child(struct intel_context *parent, + struct intel_context *child); + +#define for_each_child(parent, ce)\ + list_for_each_entry(ce, &parent->guc_child_list, guc_child_link) +#define for_each_child_safe(parent, ce, cn)\ + list_for_each_entry_safe(ce, cn, &parent->guc_child_list, guc_child_link) + /** * intel_context_lock_pinned - Stablises the 'pinned' status of the HW context * @ce - the context diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h index aabc1b349044..c2e66acaf892 100644 --- a/drivers/gpu/drm/i915/gt/intel_context_types.h +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h @@ -202,6 +202,18 @@ struct intel_context { /* GuC context blocked fence */ struct i915_sw_fence guc_blocked; + /* Head of children list or link in parent's children list */ + union { + struct list_head guc_child_list; /* parent */ + struct list_head guc_child_link; /* child */ + }; + + /* Pointer to parent */ + struct intel_context *parent; + + /* Number of children if parent */ + u8 guc_number_children; + /* * GuC priority management */ From patchwork Tue Jul 20 20:57:37 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389331 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 64FF7C636C9 for ; Tue, 20 Jul 2021 20:40:44 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 3347A60FEA for ; Tue, 20 Jul 2021 20:40:44 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 3347A60FEA Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org 
(localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 0D32A6E520; Tue, 20 Jul 2021 20:40:22 +0000 (UTC) Received: from mga12.intel.com (mga12.intel.com [192.55.52.136]) by gabe.freedesktop.org (Postfix) with ESMTPS id C06096E514; Tue, 20 Jul 2021 20:40:17 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="190909010" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="190909010" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906075" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:15 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 17/42] drm/i915/guc: Implement GuC parent-child context pin / unpin functions Date: Tue, 20 Jul 2021 13:57:37 -0700 Message-Id: <20210720205802.39610-18-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Implement GuC parent-child context pin / unpin functions in which, if any context in the relationship is pinned, all the contexts are pinned. The parent owns most of the pinning / unpinning process and the children direct any pins / unpins to the parent. The patch implements a number of currently unused functions that will be connected later in the series.
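
As a rough usage sketch only (not part of this patch; the wrapper and its error handling are illustrative assumptions layered on intel_context_create() and the bind helper from the previous patch), a later multi-lrc setup path could build and pin a parent-child group roughly like this:

  /*
   * Illustrative only: build a parent + (count - 1) children and pin the
   * group. intel_context_bind_parent_child() comes from the previous patch;
   * intel_context_pin() / intel_context_put() behave as reworked here.
   */
  static struct intel_context *
  example_create_parallel(struct intel_engine_cs **engines, unsigned int count)
  {
          struct intel_context *parent, *child;
          unsigned int i;
          int err;

          parent = intel_context_create(engines[0]);
          if (IS_ERR(parent))
                  return parent;

          for (i = 1; i < count; ++i) {
                  child = intel_context_create(engines[i]);
                  if (IS_ERR(child)) {
                          err = PTR_ERR(child);
                          goto err_put;
                  }
                  /* Children bound so far are released along with the parent. */
                  intel_context_bind_parent_child(parent, child);
          }

          /* Pinning the parent pre-pins and pins every child as well. */
          err = intel_context_pin(parent);
          if (err)
                  goto err_put;

          return parent;

  err_put:
          intel_context_put(parent);
          return ERR_PTR(err);
  }
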
Signed-off-by: Matthew Brost --- drivers/gpu/drm/i915/gt/intel_context.c | 188 ++++++++++++++++-- drivers/gpu/drm/i915/gt/intel_context.h | 43 +--- drivers/gpu/drm/i915/gt/intel_context_types.h | 4 +- .../drm/i915/gt/intel_execlists_submission.c | 25 ++- drivers/gpu/drm/i915/gt/intel_lrc.c | 18 +- drivers/gpu/drm/i915/gt/intel_lrc.h | 6 +- .../gpu/drm/i915/gt/intel_ring_submission.c | 5 +- drivers/gpu/drm/i915/gt/mock_engine.c | 4 +- .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 183 +++++++++++++++-- 9 files changed, 368 insertions(+), 108 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c index 76dd038c57d2..545bedd43509 100644 --- a/drivers/gpu/drm/i915/gt/intel_context.c +++ b/drivers/gpu/drm/i915/gt/intel_context.c @@ -162,8 +162,8 @@ static void __ring_retire(struct intel_ring *ring) intel_ring_unpin(ring); } -static int intel_context_pre_pin(struct intel_context *ce, - struct i915_gem_ww_ctx *ww) +static int __intel_context_pre_pin(struct intel_context *ce, + struct i915_gem_ww_ctx *ww) { int err; @@ -194,7 +194,7 @@ static int intel_context_pre_pin(struct intel_context *ce, return err; } -static void intel_context_post_unpin(struct intel_context *ce) +static void __intel_context_post_unpin(struct intel_context *ce) { if (ce->state) __context_unpin_state(ce->state); @@ -203,13 +203,84 @@ static void intel_context_post_unpin(struct intel_context *ce) __ring_retire(ce->ring); } -int __intel_context_do_pin_ww(struct intel_context *ce, - struct i915_gem_ww_ctx *ww) +static int intel_context_pre_pin(struct intel_context *ce, + struct i915_gem_ww_ctx *ww) { - bool handoff = false; - void *vaddr; + struct intel_context *child; + int err, i = 0; + + GEM_BUG_ON(intel_context_is_child(ce)); + + for_each_child(ce, child) { + err = __intel_context_pre_pin(child, ww); + if (unlikely(err)) + goto unwind; + ++i; + } + + err = __intel_context_pre_pin(ce, ww); + if (unlikely(err)) + goto unwind; + + return 0; + +unwind: + for_each_child(ce, child) { + if (!i--) + break; + __intel_context_post_unpin(ce); + } + + return err; +} + +static void intel_context_post_unpin(struct intel_context *ce) +{ + struct intel_context *child; + + GEM_BUG_ON(intel_context_is_child(ce)); + + for_each_child(ce, child) + __intel_context_post_unpin(child); + + __intel_context_post_unpin(ce); +} + +static int __do_ww_lock(struct intel_context *ce, + struct i915_gem_ww_ctx *ww) +{ + int err = i915_gem_object_lock(ce->timeline->hwsp_ggtt->obj, ww); + if (!err && ce->ring->vma->obj) + err = i915_gem_object_lock(ce->ring->vma->obj, ww); + if (!err && ce->state) + err = i915_gem_object_lock(ce->state->obj, ww); + + return err; +} + +static int do_ww_lock(struct intel_context *ce, + struct i915_gem_ww_ctx *ww) +{ + struct intel_context *child; int err = 0; + GEM_BUG_ON(intel_context_is_child(ce)); + + for_each_child(ce, child) { + err = __do_ww_lock(child, ww); + if (unlikely(err)) + return err; + } + + return __do_ww_lock(ce, ww); +} + +static int __intel_context_do_pin_ww(struct intel_context *ce, + struct i915_gem_ww_ctx *ww) +{ + bool handoff = false; + int err; + if (unlikely(!test_bit(CONTEXT_ALLOC_BIT, &ce->flags))) { err = intel_context_alloc_state(ce); if (err) @@ -221,14 +292,11 @@ int __intel_context_do_pin_ww(struct intel_context *ce, * refcount for __intel_context_active(), which prevent a lock * inversion of ce->pin_mutex vs dma_resv_lock(). 
*/ + err = do_ww_lock(ce, ww); + if (err) + return err; - err = i915_gem_object_lock(ce->timeline->hwsp_ggtt->obj, ww); - if (!err && ce->ring->vma->obj) - err = i915_gem_object_lock(ce->ring->vma->obj, ww); - if (!err && ce->state) - err = i915_gem_object_lock(ce->state->obj, ww); - if (!err) - err = intel_context_pre_pin(ce, ww); + err = intel_context_pre_pin(ce, ww); if (err) return err; @@ -236,7 +304,7 @@ int __intel_context_do_pin_ww(struct intel_context *ce, if (err) goto err_ctx_unpin; - err = ce->ops->pre_pin(ce, ww, &vaddr); + err = ce->ops->pre_pin(ce, ww); if (err) goto err_release; @@ -254,7 +322,7 @@ int __intel_context_do_pin_ww(struct intel_context *ce, if (unlikely(err)) goto err_unlock; - err = ce->ops->pin(ce, vaddr); + err = ce->ops->pin(ce); if (err) { intel_context_active_release(ce); goto err_unlock; @@ -294,7 +362,7 @@ int __intel_context_do_pin_ww(struct intel_context *ce, return err; } -int __intel_context_do_pin(struct intel_context *ce) +static int __intel_context_do_pin(struct intel_context *ce) { struct i915_gem_ww_ctx ww; int err; @@ -341,7 +409,7 @@ static void __intel_context_retire(struct i915_active *active) intel_context_get_avg_runtime_ns(ce)); set_bit(CONTEXT_VALID_BIT, &ce->flags); - intel_context_post_unpin(ce); + __intel_context_post_unpin(ce); intel_context_put(ce); } @@ -573,6 +641,90 @@ void intel_context_bind_parent_child(struct intel_context *parent, child->parent = parent; } +static inline int ____intel_context_pin(struct intel_context *ce) +{ + if (likely(intel_context_pin_if_active(ce))) + return 0; + + return __intel_context_do_pin(ce); +} + +static inline int __intel_context_pin_ww(struct intel_context *ce, + struct i915_gem_ww_ctx *ww) +{ + if (likely(intel_context_pin_if_active(ce))) + return 0; + + return __intel_context_do_pin_ww(ce, ww); +} + +static inline void __intel_context_unpin(struct intel_context *ce) +{ + if (!ce->ops->sched_disable) { + __intel_context_do_unpin(ce, 1); + } else { + /* + * Move ownership of this pin to the scheduling disable which is + * an async operation. When that operation completes the above + * intel_context_sched_disable_unpin is called potentially + * unpinning the context. + */ + while (!atomic_add_unless(&ce->pin_count, -1, 1)) { + if (atomic_cmpxchg(&ce->pin_count, 1, 2) == 1) { + ce->ops->sched_disable(ce); + break; + } + } + } +} + +/* + * FIXME: This is ugly, these branches are only needed for parallel contexts in + * GuC submission. Basically the idea is if any of the contexts, that are + * configured for parallel submission, are pinned all the contexts need to be + * pinned in order to register these contexts with the GuC. We are adding the + * layer here while it should probably be pushed to the backend via a vfunc. But + * since we already have ce->pin + a layer atop it is confusing. Definitely + * needs a bit of rework how to properly layer / structure this code path. What + * is in place works but is not ideal. 
+ */ +int intel_context_pin(struct intel_context *ce) +{ + if (intel_context_is_child(ce)) { + if (!atomic_fetch_add(1, &ce->pin_count)) { + return ____intel_context_pin(ce->parent); + } else { + return 0; + } + } else { + return ____intel_context_pin(ce); + } +} + +int intel_context_pin_ww(struct intel_context *ce, + struct i915_gem_ww_ctx *ww) +{ + if (intel_context_is_child(ce)) { + if (!atomic_fetch_add(1, &ce->pin_count)) { + return __intel_context_pin_ww(ce->parent, ww); + } else { + return 0; + } + } else { + return __intel_context_pin_ww(ce, ww); + } +} + +void intel_context_unpin(struct intel_context *ce) +{ + if (intel_context_is_child(ce)) { + if (atomic_fetch_add(-1, &ce->pin_count) == 1) + __intel_context_unpin(ce->parent); + } else { + __intel_context_unpin(ce); + } +} + #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST) #include "selftest_context.c" #endif diff --git a/drivers/gpu/drm/i915/gt/intel_context.h b/drivers/gpu/drm/i915/gt/intel_context.h index 28378fd9eb99..e78dfe9d2344 100644 --- a/drivers/gpu/drm/i915/gt/intel_context.h +++ b/drivers/gpu/drm/i915/gt/intel_context.h @@ -107,31 +107,15 @@ static inline void intel_context_unlock_pinned(struct intel_context *ce) mutex_unlock(&ce->pin_mutex); } -int __intel_context_do_pin(struct intel_context *ce); -int __intel_context_do_pin_ww(struct intel_context *ce, - struct i915_gem_ww_ctx *ww); - static inline bool intel_context_pin_if_active(struct intel_context *ce) { return atomic_inc_not_zero(&ce->pin_count); } -static inline int intel_context_pin(struct intel_context *ce) -{ - if (likely(intel_context_pin_if_active(ce))) - return 0; - - return __intel_context_do_pin(ce); -} - -static inline int intel_context_pin_ww(struct intel_context *ce, - struct i915_gem_ww_ctx *ww) -{ - if (likely(intel_context_pin_if_active(ce))) - return 0; +int intel_context_pin(struct intel_context *ce); - return __intel_context_do_pin_ww(ce, ww); -} +int intel_context_pin_ww(struct intel_context *ce, + struct i915_gem_ww_ctx *ww); static inline void __intel_context_pin(struct intel_context *ce) { @@ -143,28 +127,11 @@ void __intel_context_do_unpin(struct intel_context *ce, int sub); static inline void intel_context_sched_disable_unpin(struct intel_context *ce) { + GEM_BUG_ON(intel_context_is_child(ce)); __intel_context_do_unpin(ce, 2); } -static inline void intel_context_unpin(struct intel_context *ce) -{ - if (!ce->ops->sched_disable) { - __intel_context_do_unpin(ce, 1); - } else { - /* - * Move ownership of this pin to the scheduling disable which is - * an async operation. When that operation completes the above - * intel_context_sched_disable_unpin is called potentially - * unpinning the context. 
- */ - while (!atomic_add_unless(&ce->pin_count, -1, 1)) { - if (atomic_cmpxchg(&ce->pin_count, 1, 2) == 1) { - ce->ops->sched_disable(ce); - break; - } - } - } -} +void intel_context_unpin(struct intel_context *ce); void intel_context_enter_engine(struct intel_context *ce); void intel_context_exit_engine(struct intel_context *ce); diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h index c2e66acaf892..12831e2870dd 100644 --- a/drivers/gpu/drm/i915/gt/intel_context_types.h +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h @@ -39,8 +39,8 @@ struct intel_context_ops { void (*ban)(struct intel_context *ce, struct i915_request *rq); - int (*pre_pin)(struct intel_context *ce, struct i915_gem_ww_ctx *ww, void **vaddr); - int (*pin)(struct intel_context *ce, void *vaddr); + int (*pre_pin)(struct intel_context *ce, struct i915_gem_ww_ctx *ww); + int (*pin)(struct intel_context *ce); void (*unpin)(struct intel_context *ce); void (*post_unpin)(struct intel_context *ce); diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c index 2b9a2a61495f..f278744fd4d0 100644 --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c @@ -2503,16 +2503,17 @@ static void execlists_submit_request(struct i915_request *request) static int __execlists_context_pre_pin(struct intel_context *ce, struct intel_engine_cs *engine, - struct i915_gem_ww_ctx *ww, void **vaddr) + struct i915_gem_ww_ctx *ww) { int err; - err = lrc_pre_pin(ce, engine, ww, vaddr); + err = lrc_pre_pin(ce, engine, ww); if (err) return err; if (!__test_and_set_bit(CONTEXT_INIT_BIT, &ce->flags)) { - lrc_init_state(ce, engine, *vaddr); + lrc_init_state(ce, engine, ce->lrc_reg_state - + LRC_STATE_OFFSET / sizeof(*ce->lrc_reg_state)); __i915_gem_object_flush_map(ce->state->obj, 0, engine->context_size); } @@ -2521,15 +2522,14 @@ __execlists_context_pre_pin(struct intel_context *ce, } static int execlists_context_pre_pin(struct intel_context *ce, - struct i915_gem_ww_ctx *ww, - void **vaddr) + struct i915_gem_ww_ctx *ww) { - return __execlists_context_pre_pin(ce, ce->engine, ww, vaddr); + return __execlists_context_pre_pin(ce, ce->engine, ww); } -static int execlists_context_pin(struct intel_context *ce, void *vaddr) +static int execlists_context_pin(struct intel_context *ce) { - return lrc_pin(ce, ce->engine, vaddr); + return lrc_pin(ce, ce->engine); } static int execlists_context_alloc(struct intel_context *ce) @@ -3514,20 +3514,19 @@ static int virtual_context_alloc(struct intel_context *ce) } static int virtual_context_pre_pin(struct intel_context *ce, - struct i915_gem_ww_ctx *ww, - void **vaddr) + struct i915_gem_ww_ctx *ww) { struct virtual_engine *ve = container_of(ce, typeof(*ve), context); /* Note: we must use a real engine class for setting up reg state */ - return __execlists_context_pre_pin(ce, ve->siblings[0], ww, vaddr); + return __execlists_context_pre_pin(ce, ve->siblings[0], ww); } -static int virtual_context_pin(struct intel_context *ce, void *vaddr) +static int virtual_context_pin(struct intel_context *ce) { struct virtual_engine *ve = container_of(ce, typeof(*ve), context); - return lrc_pin(ce, ve->siblings[0], vaddr); + return lrc_pin(ce, ve->siblings[0]); } static void virtual_context_enter(struct intel_context *ce) diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c index 8ada1afe3d22..39dce3f24707 100644 --- 
a/drivers/gpu/drm/i915/gt/intel_lrc.c +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c @@ -896,30 +896,30 @@ void lrc_reset(struct intel_context *ce) int lrc_pre_pin(struct intel_context *ce, struct intel_engine_cs *engine, - struct i915_gem_ww_ctx *ww, - void **vaddr) + struct i915_gem_ww_ctx *ww) { + void *vaddr; GEM_BUG_ON(!ce->state); GEM_BUG_ON(!i915_vma_is_pinned(ce->state)); - *vaddr = i915_gem_object_pin_map(ce->state->obj, + vaddr = i915_gem_object_pin_map(ce->state->obj, i915_coherent_map_type(ce->engine->i915, ce->state->obj, false) | I915_MAP_OVERRIDE); - return PTR_ERR_OR_ZERO(*vaddr); + ce->lrc_reg_state = vaddr + LRC_STATE_OFFSET; + + return PTR_ERR_OR_ZERO(vaddr); } int lrc_pin(struct intel_context *ce, - struct intel_engine_cs *engine, - void *vaddr) + struct intel_engine_cs *engine) { - ce->lrc_reg_state = vaddr + LRC_STATE_OFFSET; - if (!__test_and_set_bit(CONTEXT_INIT_BIT, &ce->flags)) - lrc_init_state(ce, engine, vaddr); + lrc_init_state(ce, engine, + (void *)ce->lrc_reg_state - LRC_STATE_OFFSET); ce->lrc.lrca = lrc_update_regs(ce, engine, ce->ring->tail); return 0; diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.h b/drivers/gpu/drm/i915/gt/intel_lrc.h index 7f697845c4cf..837fcf00270d 100644 --- a/drivers/gpu/drm/i915/gt/intel_lrc.h +++ b/drivers/gpu/drm/i915/gt/intel_lrc.h @@ -38,12 +38,10 @@ void lrc_destroy(struct kref *kref); int lrc_pre_pin(struct intel_context *ce, struct intel_engine_cs *engine, - struct i915_gem_ww_ctx *ww, - void **vaddr); + struct i915_gem_ww_ctx *ww); int lrc_pin(struct intel_context *ce, - struct intel_engine_cs *engine, - void *vaddr); + struct intel_engine_cs *engine); void lrc_unpin(struct intel_context *ce); void lrc_post_unpin(struct intel_context *ce); diff --git a/drivers/gpu/drm/i915/gt/intel_ring_submission.c b/drivers/gpu/drm/i915/gt/intel_ring_submission.c index 05bb9f449df1..21168ff43b11 100644 --- a/drivers/gpu/drm/i915/gt/intel_ring_submission.c +++ b/drivers/gpu/drm/i915/gt/intel_ring_submission.c @@ -471,8 +471,7 @@ static int ring_context_init_default_state(struct intel_context *ce, } static int ring_context_pre_pin(struct intel_context *ce, - struct i915_gem_ww_ctx *ww, - void **unused) + struct i915_gem_ww_ctx *ww) { struct i915_address_space *vm; int err = 0; @@ -575,7 +574,7 @@ static int ring_context_alloc(struct intel_context *ce) return 0; } -static int ring_context_pin(struct intel_context *ce, void *unused) +static int ring_context_pin(struct intel_context *ce) { return 0; } diff --git a/drivers/gpu/drm/i915/gt/mock_engine.c b/drivers/gpu/drm/i915/gt/mock_engine.c index 2c1af030310c..826b5d7a4573 100644 --- a/drivers/gpu/drm/i915/gt/mock_engine.c +++ b/drivers/gpu/drm/i915/gt/mock_engine.c @@ -167,12 +167,12 @@ static int mock_context_alloc(struct intel_context *ce) } static int mock_context_pre_pin(struct intel_context *ce, - struct i915_gem_ww_ctx *ww, void **unused) + struct i915_gem_ww_ctx *ww) { return 0; } -static int mock_context_pin(struct intel_context *ce, void *unused) +static int mock_context_pin(struct intel_context *ce) { return 0; } diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index b159297b9334..f6cf47b1ff7c 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -1891,6 +1891,7 @@ static int guc_lrc_desc_pin(struct intel_context *ce, bool loop) GEM_BUG_ON(!engine->mask); GEM_BUG_ON(context_guc_id_invalid(ce)); + GEM_BUG_ON(intel_context_is_child(ce)); /* * Ensure LRC + CT 
vmas are is same region as write barrier is done @@ -1993,15 +1994,13 @@ static int guc_lrc_desc_pin(struct intel_context *ce, bool loop) static int __guc_context_pre_pin(struct intel_context *ce, struct intel_engine_cs *engine, - struct i915_gem_ww_ctx *ww, - void **vaddr) + struct i915_gem_ww_ctx *ww) { - return lrc_pre_pin(ce, engine, ww, vaddr); + return lrc_pre_pin(ce, engine, ww); } static int __guc_context_pin(struct intel_context *ce, - struct intel_engine_cs *engine, - void *vaddr) + struct intel_engine_cs *engine) { if (i915_ggtt_offset(ce->state) != (ce->lrc.lrca & CTX_GTT_ADDRESS_MASK)) @@ -2012,20 +2011,33 @@ static int __guc_context_pin(struct intel_context *ce, * explaination of why. */ - return lrc_pin(ce, engine, vaddr); + return lrc_pin(ce, engine); +} + +static void __guc_context_unpin(struct intel_context *ce) +{ + lrc_unpin(ce); +} + +static void __guc_context_post_unpin(struct intel_context *ce) +{ + lrc_post_unpin(ce); } static int guc_context_pre_pin(struct intel_context *ce, - struct i915_gem_ww_ctx *ww, - void **vaddr) + struct i915_gem_ww_ctx *ww) { - return __guc_context_pre_pin(ce, ce->engine, ww, vaddr); + return __guc_context_pre_pin(ce, ce->engine, ww); } -static int guc_context_pin(struct intel_context *ce, void *vaddr) +static int guc_context_pin(struct intel_context *ce) { - int ret = __guc_context_pin(ce, ce->engine, vaddr); + int ret; + GEM_BUG_ON(intel_context_is_parent(ce) || + intel_context_is_child(ce)); + + ret = __guc_context_pin(ce, ce->engine); if (likely(!ret && !intel_context_is_barrier(ce))) intel_engine_pm_get(ce->engine); @@ -2039,7 +2051,7 @@ static void guc_context_unpin(struct intel_context *ce) GEM_BUG_ON(context_enabled(ce)); unpin_guc_id(guc, ce, true); - lrc_unpin(ce); + __guc_context_unpin(ce); if (likely(!intel_context_is_barrier(ce))) intel_engine_pm_put(ce->engine); @@ -2047,7 +2059,141 @@ static void guc_context_unpin(struct intel_context *ce) static void guc_context_post_unpin(struct intel_context *ce) { - lrc_post_unpin(ce); + __guc_context_post_unpin(ce); +} + +/* Future patches will use this function */ +__attribute__ ((unused)) +static int guc_parent_context_pre_pin(struct intel_context *ce, + struct i915_gem_ww_ctx *ww) +{ + struct intel_context *child; + int err, i = 0, j = 0; + + for_each_child(ce, child) { + err = i915_active_acquire(&child->active); + if (unlikely(err)) + goto unwind_active; + ++i; + } + + for_each_child(ce, child) { + err = __guc_context_pre_pin(child, child->engine, ww); + if (unlikely(err)) + goto unwind_pre_pin; + ++j; + } + + err = __guc_context_pre_pin(ce, ce->engine, ww); + if (unlikely(err)) + goto unwind_pre_pin; + + return 0; + +unwind_pre_pin: + for_each_child(ce, child) { + if (!j--) + break; + __guc_context_post_unpin(child); + } + +unwind_active: + for_each_child(ce, child) { + if (!i--) + break; + i915_active_release(&child->active); + } + + return err; +} + +/* Future patches will use this function */ +__attribute__ ((unused)) +static void guc_parent_context_post_unpin(struct intel_context *ce) +{ + struct intel_context *child; + + for_each_child(ce, child) + __guc_context_post_unpin(child); + __guc_context_post_unpin(ce); + + for_each_child(ce, child) { + intel_context_get(child); + i915_active_release(&child->active); + intel_context_put(child); + } +} + +/* Future patches will use this function */ +__attribute__ ((unused)) +static int guc_parent_context_pin(struct intel_context *ce) +{ + int ret, i = 0, j = 0; + struct intel_context *child; + struct intel_engine_cs *engine; + 
intel_engine_mask_t tmp; + + GEM_BUG_ON(!intel_context_is_parent(ce)); + + for_each_child(ce, child) { + ret = __guc_context_pin(child, child->engine); + if (unlikely(ret)) + goto unwind_pin; + ++i; + } + ret = __guc_context_pin(ce, ce->engine); + if (unlikely(ret)) + goto unwind_pin; + + for_each_child(ce, child) + if (test_bit(CONTEXT_LRCA_DIRTY, &child->flags)) { + set_bit(CONTEXT_LRCA_DIRTY, &ce->flags); + break; + } + + for_each_engine_masked(engine, ce->engine->gt, + ce->engine->mask, tmp) + intel_engine_pm_get(engine); + for_each_child(ce, child) + for_each_engine_masked(engine, child->engine->gt, + child->engine->mask, tmp) + intel_engine_pm_get(engine); + + return 0; + +unwind_pin: + for_each_child(ce, child) { + if (++j > i) + break; + __guc_context_unpin(child); + } + + return ret; +} + +/* Future patches will use this function */ +__attribute__ ((unused)) +static void guc_parent_context_unpin(struct intel_context *ce) +{ + struct intel_context *child; + struct intel_engine_cs *engine; + intel_engine_mask_t tmp; + + GEM_BUG_ON(!intel_context_is_parent(ce)); + GEM_BUG_ON(context_enabled(ce)); + + unpin_guc_id(ce_to_guc(ce), ce, true); + for_each_child(ce, child) + __guc_context_unpin(child); + __guc_context_unpin(ce); + + for_each_engine_masked(engine, ce->engine->gt, + ce->engine->mask, tmp) + intel_engine_pm_put(engine); + for_each_child(ce, child) + for_each_engine_masked(engine, child->engine->gt, + child->engine->mask, tmp) + intel_engine_pm_put(engine); } static void __guc_context_sched_enable(struct intel_guc *guc, @@ -2962,18 +3108,17 @@ static int guc_request_alloc(struct i915_request *rq) } static int guc_virtual_context_pre_pin(struct intel_context *ce, - struct i915_gem_ww_ctx *ww, - void **vaddr) + struct i915_gem_ww_ctx *ww) { struct intel_engine_cs *engine = guc_virtual_get_sibling(ce->engine, 0); - return __guc_context_pre_pin(ce, engine, ww, vaddr); + return __guc_context_pre_pin(ce, engine, ww); } -static int guc_virtual_context_pin(struct intel_context *ce, void *vaddr) +static int guc_virtual_context_pin(struct intel_context *ce) { struct intel_engine_cs *engine = guc_virtual_get_sibling(ce->engine, 0); - int ret = __guc_context_pin(ce, engine, vaddr); + int ret = __guc_context_pin(ce, engine); intel_engine_mask_t tmp, mask = ce->engine->mask; if (likely(!ret)) @@ -2993,7 +3138,7 @@ static void guc_virtual_context_unpin(struct intel_context *ce) GEM_BUG_ON(intel_context_is_barrier(ce)); unpin_guc_id(guc, ce, true); - lrc_unpin(ce); + __guc_context_unpin(ce); for_each_engine_masked(engine, ce->engine->gt, mask, tmp) intel_engine_pm_put(engine); From patchwork Tue Jul 20 20:57:38 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389337 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EB00FC636C8 for ; Tue, 20 Jul 2021 20:40:48 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by 
mail.kernel.org (Postfix) with ESMTPS id BDB3760FEA for ; Tue, 20 Jul 2021 20:40:48 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org BDB3760FEA Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id A4BE96E519; Tue, 20 Jul 2021 20:40:23 +0000 (UTC) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by gabe.freedesktop.org (Postfix) with ESMTPS id A898F6E4F1; Tue, 20 Jul 2021 20:40:17 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="209421885" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="209421885" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:17 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906076" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:15 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 18/42] drm/i915/guc: Add multi-lrc context registration Date: Tue, 20 Jul 2021 13:57:38 -0700 Message-Id: <20210720205802.39610-19-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Add multi-lrc context registration H2G. In addition a workqueue and process descriptor are setup during multi-lrc context registration as these data structures are needed for multi-lrc submission. 
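
For reference (the guc_id and descriptor-pool offsets are invented example values), the register H2G that the new __guc_action_register_multi_lrc() helper below builds for a parent with two children would look like:

  /*
   * Illustrative only: action array for a parent with guc_id 7 and two
   * children; the LRC descriptor offsets (0x1000, 0x1080, 0x1100) are
   * made-up example values.
   */
  u32 action[] = {
          INTEL_GUC_ACTION_REGISTER_CONTEXT_MULTI_LRC, /* 0x4601 */
          7,      /* guc_id of the parent */
          3,      /* number of contexts: parent + 2 children */
          0x1000, /* __get_lrc_desc_offset() of the parent */
          0x1080, /* __get_lrc_desc_offset() of child 0 */
          0x1100, /* __get_lrc_desc_offset() of child 1 */
  };
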
Signed-off-by: Matthew Brost --- drivers/gpu/drm/i915/gt/intel_context_types.h | 6 + drivers/gpu/drm/i915/gt/intel_lrc.c | 5 + .../gpu/drm/i915/gt/uc/abi/guc_actions_abi.h | 1 + drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h | 2 +- .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 118 +++++++++++++++++- 5 files changed, 130 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h index 12831e2870dd..615b8f1d723d 100644 --- a/drivers/gpu/drm/i915/gt/intel_context_types.h +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h @@ -211,9 +211,15 @@ struct intel_context { /* Pointer to parent */ struct intel_context *parent; + /* GuC workqueue head / tail - only applies to parent */ + u16 guc_wqi_tail; + u16 guc_wqi_head; + /* Number of children if parent */ u8 guc_number_children; + u8 parent_page; /* if set, page num reserved for parent context */ + /* * GuC priority management */ diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c index 39dce3f24707..32882f195022 100644 --- a/drivers/gpu/drm/i915/gt/intel_lrc.c +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c @@ -810,6 +810,11 @@ __lrc_alloc_state(struct intel_context *ce, struct intel_engine_cs *engine) context_size += PAGE_SIZE; } + if (intel_context_is_parent(ce)) { + ce->parent_page = context_size / PAGE_SIZE; + context_size += PAGE_SIZE; + } + obj = i915_gem_object_create_lmem(engine->i915, context_size, 0); if (IS_ERR(obj)) obj = i915_gem_object_create_shmem(engine->i915, context_size); diff --git a/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h b/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h index 596cf4b818e5..5c3dbf83971f 100644 --- a/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h +++ b/drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h @@ -142,6 +142,7 @@ enum intel_guc_action { INTEL_GUC_ACTION_REGISTER_COMMAND_TRANSPORT_BUFFER = 0x4505, INTEL_GUC_ACTION_DEREGISTER_COMMAND_TRANSPORT_BUFFER = 0x4506, INTEL_GUC_ACTION_DEREGISTER_CONTEXT_DONE = 0x4600, + INTEL_GUC_ACTION_REGISTER_CONTEXT_MULTI_LRC = 0x4601, INTEL_GUC_ACTION_RESET_CLIENT = 0x5B01, INTEL_GUC_ACTION_LIMIT }; diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h index 94bb1ca6f889..86fb90a4bcfa 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h @@ -51,7 +51,7 @@ #define GUC_DOORBELL_INVALID 256 -#define GUC_WQ_SIZE (PAGE_SIZE * 2) +#define GUC_WQ_SIZE (PAGE_SIZE / 2) /* Work queue item header definitions */ #define WQ_STATUS_ACTIVE 1 diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index f6cf47b1ff7c..0045895e0fa0 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -436,6 +436,39 @@ static inline struct i915_priolist *to_priolist(struct rb_node *rb) return rb_entry(rb, struct i915_priolist, node); } +/* + * When using multi-lrc submission an extra page in the context state is + * reserved for the process descriptor and work queue. + * + * The layout of this page is below: + * 0 guc_process_desc + * ... unused + * PAGE_SIZE / 2 work queue start + * ... 
work queue + * PAGE_SIZE - 1 work queue end + */ +#define WQ_OFFSET (PAGE_SIZE / 2) +static inline u32 __get_process_desc_offset(struct intel_context *ce) +{ + GEM_BUG_ON(!ce->parent_page); + + return ce->parent_page * PAGE_SIZE; +} + +static inline u32 __get_wq_offset(struct intel_context *ce) +{ + return __get_process_desc_offset(ce) + WQ_OFFSET; +} + +static inline struct guc_process_desc * +__get_process_desc(struct intel_context *ce) +{ + return (struct guc_process_desc *) + (ce->lrc_reg_state + + ((__get_process_desc_offset(ce) - + LRC_STATE_OFFSET) / sizeof(u32))); +} + static u32 __get_lrc_desc_offset(struct intel_guc *guc, int index) { GEM_BUG_ON(index >= guc->lrcd_reg.max_idx); @@ -1731,6 +1764,28 @@ static void unpin_guc_id(struct intel_guc *guc, spin_unlock_irqrestore(&guc->contexts_lock, flags); } +static int __guc_action_register_multi_lrc(struct intel_guc *guc, + struct intel_context *ce, + u32 guc_id, + bool loop) +{ + struct intel_context *child; + u32 action[3 + MAX_ENGINE_INSTANCE]; + int len = 0; + + GEM_BUG_ON(ce->guc_number_children > MAX_ENGINE_INSTANCE); + + action[len++] = INTEL_GUC_ACTION_REGISTER_CONTEXT_MULTI_LRC; + action[len++] = guc_id; + action[len++] = ce->guc_number_children + 1; + action[len++] = __get_lrc_desc_offset(guc, ce->guc_lrcd_reg_idx); + for_each_child(ce, child) + action[len++] = + __get_lrc_desc_offset(guc, child->guc_lrcd_reg_idx); + + return guc_submission_send_busy_loop(guc, action, len, 0, loop); +} + static int __guc_action_register_context(struct intel_guc *guc, struct intel_context *ce, u32 guc_id, @@ -1751,9 +1806,15 @@ static int register_context(struct intel_context *ce, bool loop) struct intel_guc *guc = ce_to_guc(ce); int ret; + GEM_BUG_ON(intel_context_is_child(ce)); trace_intel_context_register(ce); - ret = __guc_action_register_context(guc, ce, ce->guc_id, loop); + if (intel_context_is_parent(ce)) + ret = __guc_action_register_multi_lrc(guc, ce, ce->guc_id, + loop); + else + ret = __guc_action_register_context(guc, ce, ce->guc_id, loop); + set_context_registered(ce); return ret; } @@ -1776,6 +1837,7 @@ static int deregister_context(struct intel_context *ce, u32 guc_id, bool loop) { struct intel_guc *guc = ce_to_guc(ce); + GEM_BUG_ON(intel_context_is_child(ce)); trace_intel_context_deregister(ce); return __guc_action_deregister_context(guc, guc_id, loop); @@ -1846,7 +1908,11 @@ static void __free_lrcd_reg_idx(struct intel_guc *guc, struct intel_context *ce) static void free_lrcd_reg_idx(struct intel_guc *guc, struct intel_context *ce) { + struct intel_context *child; + __free_lrcd_reg_idx(guc, ce); + for_each_child(ce, child) + __free_lrcd_reg_idx(guc, child); } static int guc_lrcd_reg_init(struct intel_guc *guc) @@ -1887,6 +1953,7 @@ static int guc_lrc_desc_pin(struct intel_context *ce, bool loop) int prio = I915_CONTEXT_DEFAULT_PRIORITY; bool context_registered; intel_wakeref_t wakeref; + struct intel_context *child; int ret = 0; GEM_BUG_ON(!engine->mask); @@ -1907,6 +1974,13 @@ static int guc_lrc_desc_pin(struct intel_context *ce, bool loop) return ret; ce->guc_lrcd_reg_idx = ret; } + for_each_child(ce, child) + if (likely(!child->guc_lrcd_reg_idx)) { + ret = alloc_lrcd_reg_idx(guc, !loop); + if (unlikely(ret < 0)) + return ret; + child->guc_lrcd_reg_idx = ret; + } context_registered = lrc_desc_registered(guc, desc_idx); @@ -1930,6 +2004,42 @@ static int guc_lrc_desc_pin(struct intel_context *ce, bool loop) guc_context_policy_init(engine, desc); init_sched_state(ce); + /* + * Context is a parent, we need to register a process 
descriptor + * describing a work queue and register all child contexts. + */ + if (intel_context_is_parent(ce)) { + struct guc_process_desc *pdesc; + + ce->guc_wqi_tail = 0; + ce->guc_wqi_head = 0; + + desc->process_desc = i915_ggtt_offset(ce->state) + + __get_process_desc_offset(ce); + desc->wq_addr = i915_ggtt_offset(ce->state) + + __get_wq_offset(ce); + desc->wq_size = GUC_WQ_SIZE; + + pdesc = __get_process_desc(ce); + memset(pdesc, 0, sizeof(*(pdesc))); + pdesc->stage_id = ce->guc_id; + pdesc->wq_base_addr = desc->wq_addr; + pdesc->wq_size_bytes = desc->wq_size; + pdesc->priority = GUC_CLIENT_PRIORITY_KMD_NORMAL; + pdesc->wq_status = WQ_STATUS_ACTIVE; + + for_each_child(ce, child) { + desc = __get_lrc_desc(guc, child->guc_lrcd_reg_idx); + + desc->engine_class = + engine_class_to_guc_class(engine->class); + desc->hw_context_desc = child->lrc.lrca; + desc->priority = GUC_CLIENT_PRIORITY_KMD_NORMAL; + desc->context_flags = CONTEXT_REGISTRATION_FLAG_KMD; + guc_context_policy_init(engine, desc); + } + } + /* * The context_lookup xarray is used to determine if the hardware * context is currently registered. There are two cases in which it @@ -3621,6 +3731,12 @@ g2h_context_lookup(struct intel_guc *guc, u32 desc_idx) return NULL; } + if (unlikely(intel_context_is_child(ce))) { + drm_err(&guc_to_gt(guc)->i915->drm, + "Context is child, desc_idx %u", desc_idx); + return NULL; + } + return ce; } From patchwork Tue Jul 20 20:57:39 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389393 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2BE2DC07E95 for ; Tue, 20 Jul 2021 20:41:49 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 0525960FEA for ; Tue, 20 Jul 2021 20:41:49 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 0525960FEA Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 04ABD6E5C5; Tue, 20 Jul 2021 20:40:58 +0000 (UTC) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by gabe.freedesktop.org (Postfix) with ESMTPS id CB5626E516; Tue, 20 Jul 2021 20:40:17 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="296885372" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="296885372" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:17 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906077" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 From: 
Matthew Brost To: , Subject: [RFC PATCH 19/42] drm/i915/guc: Ensure GuC schedule operations do not operate on child contexts Date: Tue, 20 Jul 2021 13:57:39 -0700 Message-Id: <20210720205802.39610-20-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" In GuC parent-child contexts the parent context controls the scheduling, ensure only the parent does the scheduling operations. Signed-off-by: Matthew Brost --- .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 48 +++++++++++++++---- 1 file changed, 38 insertions(+), 10 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index 0045895e0fa0..f60a46704ac5 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -402,6 +402,18 @@ static inline void clr_context_banned(struct intel_context *ce) ce->guc_state.sched_state &= ~SCHED_STATE_BANNED; } +static inline struct intel_context * +to_parent(struct intel_context *ce) +{ + return intel_context_is_child(ce) ? ce->parent : ce; +} + +static inline struct intel_context * +request_to_scheduling_context(struct i915_request *rq) +{ + return to_parent(rq->context); +} + static inline bool context_guc_id_invalid(struct intel_context *ce) { return (ce->guc_id == GUC_INVALID_LRC_ID); @@ -2341,6 +2353,7 @@ static void __guc_context_sched_disable(struct intel_guc *guc, GEM_BUG_ON(guc_id == GUC_INVALID_LRC_ID); #endif + GEM_BUG_ON(intel_context_is_child(ce)); trace_intel_context_sched_disable(ce); guc_submission_send_busy_loop(guc, action, ARRAY_SIZE(action), @@ -2546,6 +2559,8 @@ static void guc_context_sched_disable(struct intel_context *ce) u16 guc_id; bool enabled; + GEM_BUG_ON(intel_context_is_child(ce)); + if (submission_disabled(guc) || context_guc_id_invalid(ce) || !lrc_desc_registered(guc, ce->guc_id)) { clr_context_enabled(ce); @@ -2941,6 +2956,8 @@ static void guc_signal_context_fence(struct intel_context *ce) { unsigned long flags; + GEM_BUG_ON(intel_context_is_child(ce)); + spin_lock_irqsave(&ce->guc_state.lock, flags); clr_context_wait_for_deregister_to_register(ce); __guc_signal_context_fence(ce); @@ -3026,10 +3043,21 @@ static bool context_needs_lrc_desc_pin(struct intel_context *ce, bool new_guc_id !submission_disabled(ce_to_guc(ce)); } +static void clear_lrca_dirty(struct intel_context *ce) +{ + struct intel_context *child; + + GEM_BUG_ON(intel_context_is_child(ce)); + + clear_bit(CONTEXT_LRCA_DIRTY, &ce->flags); + for_each_child(ce, child) + clear_bit(CONTEXT_LRCA_DIRTY, &child->flags); +} + static int tasklet_pin_guc_id(struct guc_submit_engine *gse, struct i915_request *rq) { - struct intel_context *ce = rq->context; + struct intel_context *ce = request_to_scheduling_context(rq); int ret = 0; lockdep_assert_held(&gse->sched_engine.lock); @@ -3061,7 +3089,7 @@ static int tasklet_pin_guc_id(struct guc_submit_engine *gse, gse->submission_stall_reason = STALL_SCHED_DISABLE; } - clear_bit(CONTEXT_LRCA_DIRTY, &ce->flags); + clear_lrca_dirty(ce); out: gse->total_num_rq_with_no_guc_id -= ce->guc_num_rq_submit_no_id; 
GEM_BUG_ON(gse->total_num_rq_with_no_guc_id < 0); @@ -3092,7 +3120,7 @@ static int tasklet_pin_guc_id(struct guc_submit_engine *gse, static int guc_request_alloc(struct i915_request *rq) { - struct intel_context *ce = rq->context; + struct intel_context *ce = request_to_scheduling_context(rq); struct intel_guc *guc = ce_to_guc(ce); struct guc_submit_engine *gse = ce_to_gse(ce); unsigned long flags; @@ -3143,11 +3171,12 @@ static int guc_request_alloc(struct i915_request *rq) * persistent until the generated request is retired. Thus, sealing these * race conditions. * - * There is no need for a lock here as the timeline mutex ensures at - * most one context can be executing this code path at once. The - * guc_id_ref is incremented once for every request in flight and - * decremented on each retire. When it is zero, a lock around the - * increment (in pin_guc_id) is needed to seal a race with unpin_guc_id. + * There is no need for a lock here as the timeline mutex (or + * parallel_submit mutex in the case of multi-lrc) ensures at most one + * context can be executing this code path at once. The guc_id_ref is + * incremented once for every request in flight and decremented on each + * retire. When it is zero, a lock around the increment (in pin_guc_id) + * is needed to seal a race with unpin_guc_id. */ if (atomic_add_unless(&ce->guc_id_ref, 1, 0)) goto out; @@ -3185,8 +3214,7 @@ static int guc_request_alloc(struct i915_request *rq) } } - clear_bit(CONTEXT_LRCA_DIRTY, &ce->flags); - + clear_lrca_dirty(ce); out: incr_num_rq_not_ready(ce); From patchwork Tue Jul 20 20:57:40 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389367 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E487BC636C9 for ; Tue, 20 Jul 2021 20:41:24 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id AEF3D60FF3 for ; Tue, 20 Jul 2021 20:41:24 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org AEF3D60FF3 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 221F16E59B; Tue, 20 Jul 2021 20:40:37 +0000 (UTC) Received: from mga12.intel.com (mga12.intel.com [192.55.52.136]) by gabe.freedesktop.org (Postfix) with ESMTPS id 3823B6E523; Tue, 20 Jul 2021 20:40:18 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="190909011" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="190909011" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:17 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906079" Received: from 
dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 20/42] drm/i915/guc: Assign contexts in parent-child relationship consecutive guc_ids Date: Tue, 20 Jul 2021 13:57:40 -0700 Message-Id: <20210720205802.39610-21-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Assign contexts in parent-child relationship consecutive guc_ids. This is accomplished by partitioning guc_id space between ones that need to be consecutive (1/16 available guc_ids) and ones that do not (15/16 of available guc_ids). The consecutive search is implemented via the bitmap API. This is a precursor to the full GuC multi-lrc implementation but aligns to how GuC mutli-lrc interface is defined - guc_ids must be consecutive when using the GuC multi-lrc interface. Signed-off-by: Matthew Brost --- drivers/gpu/drm/i915/gt/intel_context.h | 6 + drivers/gpu/drm/i915/gt/intel_reset.c | 3 +- drivers/gpu/drm/i915/gt/uc/intel_guc.h | 7 +- .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 222 ++++++++++++------ .../i915/gt/uc/intel_guc_submission_types.h | 10 + 5 files changed, 178 insertions(+), 70 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_context.h b/drivers/gpu/drm/i915/gt/intel_context.h index e78dfe9d2344..e18f16de1158 100644 --- a/drivers/gpu/drm/i915/gt/intel_context.h +++ b/drivers/gpu/drm/i915/gt/intel_context.h @@ -51,6 +51,12 @@ static inline bool intel_context_is_parent(struct intel_context *ce) return !!ce->guc_number_children; } +static inline struct intel_context * +intel_context_to_parent(struct intel_context *ce) +{ + return intel_context_is_child(ce) ? ce->parent : ce; +} + void intel_context_bind_parent_child(struct intel_context *parent, struct intel_context *child); diff --git a/drivers/gpu/drm/i915/gt/intel_reset.c b/drivers/gpu/drm/i915/gt/intel_reset.c index 880ca753fdd8..ebc978679b19 100644 --- a/drivers/gpu/drm/i915/gt/intel_reset.c +++ b/drivers/gpu/drm/i915/gt/intel_reset.c @@ -843,6 +843,7 @@ static void reset_finish(struct intel_gt *gt, intel_engine_mask_t awake) static void nop_submit_request(struct i915_request *request) { + struct intel_context *ce = intel_context_to_parent(request->context); RQ_TRACE(request, "-EIO\n"); /* @@ -851,7 +852,7 @@ static void nop_submit_request(struct i915_request *request) * this for now. 
*/ if (intel_engine_uses_guc(request->engine)) - intel_guc_decr_num_rq_not_ready(request->context); + intel_guc_decr_num_rq_not_ready(ce); request = i915_request_mark_eio(request); if (request) { diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc.h index 466cc6227503..f79ae5796fd6 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h @@ -24,6 +24,7 @@ struct __guc_ads_blob; enum { GUC_SUBMIT_ENGINE_SINGLE_LRC, + GUC_SUBMIT_ENGINE_MULTI_LRC, GUC_SUBMIT_ENGINE_MAX }; @@ -59,8 +60,10 @@ struct intel_guc { struct ida guc_ids; u32 num_guc_ids; u32 max_guc_ids; - struct list_head guc_id_list_no_ref; - struct list_head guc_id_list_unpinned; + unsigned long *guc_ids_bitmap; +#define MAX_GUC_ID_ORDER 3 + struct list_head guc_id_list_no_ref[MAX_GUC_ID_ORDER + 1]; + struct list_head guc_id_list_unpinned[MAX_GUC_ID_ORDER + 1]; spinlock_t destroy_lock; struct list_head destroyed_contexts; diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index f60a46704ac5..d8be5a41d0ca 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -169,6 +169,15 @@ static void clr_guc_ids_exhausted(struct guc_submit_engine *gse) clear_bit(GSE_STATE_GUC_IDS_EXHAUSTED, &gse->flags); } +/* + * We reserve 1/16 of the guc_ids for multi-lrc as these need to be contiguous + * and a different allocation algorithm is used (bitmap vs. ida). We believe the + * number of multi-lrc contexts in use should be low and 1/16 should be + * sufficient. Minimum of 32 ids for multi-lrc. + */ +#define NUMBER_MULTI_LRC_GUC_ID(guc) \ + ((guc)->num_guc_ids / 16 > 32 ? (guc)->num_guc_ids / 16 : 32) + /* * Below is a set of functions which control the GuC scheduling state which do * not require a lock as all state transitions are mutually exclusive. i.e. It @@ -402,16 +411,10 @@ static inline void clr_context_banned(struct intel_context *ce) ce->guc_state.sched_state &= ~SCHED_STATE_BANNED; } -static inline struct intel_context * -to_parent(struct intel_context *ce) -{ - return intel_context_is_child(ce) ? 
ce->parent : ce; -} - static inline struct intel_context * request_to_scheduling_context(struct i915_request *rq) { - return to_parent(rq->context); + return intel_context_to_parent(rq->context); } static inline bool context_guc_id_invalid(struct intel_context *ce) @@ -1426,7 +1429,7 @@ static void destroy_worker_func(struct work_struct *w); */ int intel_guc_submission_init(struct intel_guc *guc) { - int ret; + int ret, i; if (guc_submission_initialized(guc)) return 0; @@ -1438,9 +1441,13 @@ int intel_guc_submission_init(struct intel_guc *guc) xa_init_flags(&guc->context_lookup, XA_FLAGS_LOCK_IRQ); spin_lock_init(&guc->contexts_lock); - INIT_LIST_HEAD(&guc->guc_id_list_no_ref); - INIT_LIST_HEAD(&guc->guc_id_list_unpinned); + for (i = 0; i < MAX_GUC_ID_ORDER + 1; ++i) { + INIT_LIST_HEAD(&guc->guc_id_list_no_ref[i]); + INIT_LIST_HEAD(&guc->guc_id_list_unpinned[i]); + } ida_init(&guc->guc_ids); + guc->guc_ids_bitmap = + bitmap_zalloc(NUMBER_MULTI_LRC_GUC_ID(guc), GFP_KERNEL); spin_lock_init(&guc->destroy_lock); @@ -1466,6 +1473,8 @@ void intel_guc_submission_fini(struct intel_guc *guc) i915_sched_engine_put(sched_engine); } + + bitmap_free(guc->guc_ids_bitmap); } static inline void queue_request(struct i915_sched_engine *sched_engine, @@ -1489,11 +1498,13 @@ static inline void queue_request(struct i915_sched_engine *sched_engine, static bool too_many_guc_ids_not_ready(struct guc_submit_engine *gse, struct intel_context *ce) { - u32 available_guc_ids, guc_ids_consumed; struct intel_guc *guc = gse->sched_engine.private_data; + u32 available_guc_ids = intel_context_is_parent(ce) ? + NUMBER_MULTI_LRC_GUC_ID(guc) : + guc->num_guc_ids - NUMBER_MULTI_LRC_GUC_ID(guc); + u32 guc_ids_consumed = atomic_read(&gse->num_guc_ids_not_ready); - available_guc_ids = guc->num_guc_ids; - guc_ids_consumed = atomic_read(&gse->num_guc_ids_not_ready); + GEM_BUG_ON(intel_context_is_child(ce)); if (TOO_MANY_GUC_IDS_NOT_READY(available_guc_ids, guc_ids_consumed)) { set_and_update_guc_ids_exhausted(gse); @@ -1507,17 +1518,26 @@ static void incr_num_rq_not_ready(struct intel_context *ce) { struct guc_submit_engine *gse = ce_to_gse(ce); + GEM_BUG_ON(intel_context_is_child(ce)); + GEM_BUG_ON(!intel_context_is_parent(ce) && + ce->guc_number_children); + if (!atomic_fetch_add(1, &ce->guc_num_rq_not_ready)) - atomic_inc(&gse->num_guc_ids_not_ready); + atomic_add(ce->guc_number_children + 1, + &gse->num_guc_ids_not_ready); } void intel_guc_decr_num_rq_not_ready(struct intel_context *ce) { struct guc_submit_engine *gse = ce_to_gse(ce); + GEM_BUG_ON(intel_context_is_child(ce)); + if (atomic_fetch_add(-1, &ce->guc_num_rq_not_ready) == 1) { GEM_BUG_ON(!atomic_read(&gse->num_guc_ids_not_ready)); - atomic_dec(&gse->num_guc_ids_not_ready); + + atomic_sub(ce->guc_number_children + 1, + &gse->num_guc_ids_not_ready); } } @@ -1569,20 +1589,42 @@ static void guc_submit_request(struct i915_request *rq) spin_unlock_irqrestore(&sched_engine->lock, flags); - intel_guc_decr_num_rq_not_ready(rq->context); + intel_guc_decr_num_rq_not_ready(request_to_scheduling_context(rq)); } -static int new_guc_id(struct intel_guc *guc) +static int new_guc_id(struct intel_guc *guc, struct intel_context *ce) { - return ida_simple_get(&guc->guc_ids, 0, - guc->num_guc_ids, GFP_KERNEL | - __GFP_RETRY_MAYFAIL | __GFP_NOWARN); + int ret; + + GEM_BUG_ON(intel_context_is_child(ce)); + + if (intel_context_is_parent(ce)) + ret = bitmap_find_free_region(guc->guc_ids_bitmap, + NUMBER_MULTI_LRC_GUC_ID(guc), + order_base_2(ce->guc_number_children + + 1)); + else + ret = 
ida_simple_get(&guc->guc_ids, + NUMBER_MULTI_LRC_GUC_ID(guc), + guc->num_guc_ids, GFP_KERNEL | + __GFP_RETRY_MAYFAIL | __GFP_NOWARN); + if (unlikely(ret < 0)) + return ret; + + ce->guc_id = ret; + return 0; } static void __release_guc_id(struct intel_guc *guc, struct intel_context *ce) { + GEM_BUG_ON(intel_context_is_child(ce)); if (!context_guc_id_invalid(ce)) { - ida_simple_remove(&guc->guc_ids, ce->guc_id); + if (intel_context_is_parent(ce)) + bitmap_release_region(guc->guc_ids_bitmap, ce->guc_id, + order_base_2(ce->guc_number_children + + 1)); + else + ida_simple_remove(&guc->guc_ids, ce->guc_id); clr_lrc_desc_registered(guc, ce->guc_id); set_context_guc_id_invalid(ce); } @@ -1594,6 +1636,8 @@ static void release_guc_id(struct intel_guc *guc, struct intel_context *ce) { unsigned long flags; + GEM_BUG_ON(intel_context_is_child(ce)); + spin_lock_irqsave(&guc->contexts_lock, flags); __release_guc_id(guc, ce); spin_unlock_irqrestore(&guc->contexts_lock, flags); @@ -1608,54 +1652,91 @@ static void release_guc_id(struct intel_guc *guc, struct intel_context *ce) * schedule disable H2G + a deregister H2G. */ static struct list_head *get_guc_id_list(struct intel_guc *guc, + u8 number_children, bool unpinned) { + GEM_BUG_ON(order_base_2(number_children + 1) > MAX_GUC_ID_ORDER); + if (unpinned) - return &guc->guc_id_list_unpinned; + return &guc->guc_id_list_unpinned[order_base_2(number_children + 1)]; else - return &guc->guc_id_list_no_ref; + return &guc->guc_id_list_no_ref[order_base_2(number_children + 1)]; } -static int steal_guc_id(struct intel_guc *guc, bool unpinned) +static int steal_guc_id(struct intel_guc *guc, struct intel_context *ce, + bool unpinned) { - struct intel_context *ce; - int guc_id; - struct list_head *guc_id_list = get_guc_id_list(guc, unpinned); + struct intel_context *cn; + u8 number_children = ce->guc_number_children; lockdep_assert_held(&guc->contexts_lock); + GEM_BUG_ON(intel_context_is_child(ce)); - if (!list_empty(guc_id_list)) { - ce = list_first_entry(guc_id_list, - struct intel_context, - guc_id_link); + do { + struct list_head *guc_id_list = + get_guc_id_list(guc, number_children, unpinned); - /* Ensure context getting stolen in expected state */ - GEM_BUG_ON(atomic_read(&ce->guc_id_ref)); - GEM_BUG_ON(context_guc_id_invalid(ce)); - GEM_BUG_ON(context_guc_id_stolen(ce)); + if (!list_empty(guc_id_list)) { + u8 cn_o2, ce_o2 = order_base_2(ce->guc_number_children); - list_del_init(&ce->guc_id_link); - guc_id = ce->guc_id; - clr_context_registered(ce); + cn = list_first_entry(guc_id_list, + struct intel_context, + guc_id_link); + cn_o2 = order_base_2(cn->guc_number_children); - /* - * If stealing from the pinned list, defer invalidating - * the guc_id until the retire workqueue processes this - * context. + /* + * Corner case where a multi-lrc context steals a guc_id + * from another context that has more guc_ids than itself.
+ */ + if (cn_o2 != ce_o2) { + bitmap_release_region(guc->guc_ids_bitmap, + cn->guc_id, + cn_o2); + bitmap_allocate_region(guc->guc_ids_bitmap, + ce->guc_id, + ce_o2); + } - /* Ensure context getting stolen in expected state */ - GEM_BUG_ON(atomic_read(&ce->guc_id_ref)); - GEM_BUG_ON(context_guc_id_invalid(ce)); - GEM_BUG_ON(context_guc_id_stolen(ce)); + /* Ensure context getting stolen in expected state */ + GEM_BUG_ON(atomic_read(&cn->guc_id_ref)); + GEM_BUG_ON(context_guc_id_invalid(cn)); + GEM_BUG_ON(context_guc_id_stolen(cn)); + GEM_BUG_ON(ce_to_gse(ce) != ce_to_gse(cn)); + + list_del_init(&cn->guc_id_link); + ce->guc_id = cn->guc_id; + + /* + * If stealing from the pinned list, defer invalidating + * the guc_id until the retire workqueue processes this + * context. + */ + clr_context_registered(cn); + if (!unpinned) { + GEM_BUG_ON(ce_to_gse(cn)->stalled_context); + ce_to_gse(cn)->stalled_context = + intel_context_get(cn); + set_context_guc_id_stolen(cn); + } else { + set_context_guc_id_invalid(cn); + } + + return 0; } - return guc_id; - } else { - return -EAGAIN; - } + /* + * When using multi-lrc we search the guc_id_lists with the + * fewest guc_ids required first but will consume a + * larger block of guc_ids if necessary. +2 is sufficient to + * move the search from 2 wide to 4 wide and 4 wide to 8 wide. + */ + if (!number_children || number_children > 3) + break; + else + number_children += 2; + } while (true); + + return -EAGAIN; } enum { /* Return values for pin_guc_id / assign_guc_id */ @@ -1664,17 +1745,18 @@ enum { /* Return values for pin_guc_id / assign_guc_id */ NEW_GUC_ID_ENABLED =2, }; -static int assign_guc_id(struct intel_guc *guc, u16 *out, bool tasklet) +static int assign_guc_id(struct intel_guc *guc, struct intel_context *ce, + bool tasklet) { int ret; lockdep_assert_held(&guc->contexts_lock); + GEM_BUG_ON(intel_context_is_child(ce)); - ret = new_guc_id(guc); + ret = new_guc_id(guc, ce); if (unlikely(ret < 0)) { - ret = steal_guc_id(guc, true); - if (ret >= 0) { - *out = ret; + ret = steal_guc_id(guc, ce, true); + if (!ret) { ret = NEW_GUC_ID_DISABLED; } else if (ret < 0 && tasklet) { /* @@ -1682,15 +1764,18 @@ static int assign_guc_id(struct intel_guc *guc, u16 *out, bool tasklet) * enabled if guc_ids are exhausted and we are submitting * from the tasklet.
*/ - ret = steal_guc_id(guc, ce, false); - if (ret >= 0) { - *out = ret; + ret = steal_guc_id(guc, ce, false); + if (!ret) ret = NEW_GUC_ID_ENABLED; - } } - } else { - *out = ret; - ret = SAME_GUC_ID; + } + + if (!(ret < 0) && intel_context_is_parent(ce)) { + struct intel_context *child; + int i = 1; + + for_each_child(ce, child) + child->guc_id = ce->guc_id + i++; } return ret; @@ -1703,6 +1788,7 @@ static int pin_guc_id(struct intel_guc *guc, struct intel_context *ce, int ret = 0; unsigned long flags, tries = PIN_GUC_ID_TRIES; + GEM_BUG_ON(intel_context_is_child(ce)); GEM_BUG_ON(atomic_read(&ce->guc_id_ref)); try_again: @@ -1714,7 +1800,7 @@ static int pin_guc_id(struct intel_guc *guc, struct intel_context *ce, } if (context_guc_id_invalid(ce)) { - ret = assign_guc_id(guc, &ce->guc_id, tasklet); + ret = assign_guc_id(guc, ce, tasklet); if (unlikely(ret < 0)) goto out_unlock; } @@ -1760,6 +1846,7 @@ static void unpin_guc_id(struct intel_guc *guc, unsigned long flags; GEM_BUG_ON(atomic_read(&ce->guc_id_ref) < 0); + GEM_BUG_ON(intel_context_is_child(ce)); spin_lock_irqsave(&guc->contexts_lock, flags); @@ -1768,7 +1855,8 @@ static void unpin_guc_id(struct intel_guc *guc, if (!context_guc_id_invalid(ce) && !context_guc_id_stolen(ce) && !atomic_read(&ce->guc_id_ref)) { - struct list_head *head = get_guc_id_list(guc, unpinned); + struct list_head *head = + get_guc_id_list(guc, ce->guc_number_children, unpinned); list_add_tail(&ce->guc_id_link, head); } diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission_types.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission_types.h index 7069b7248f55..a5933e07bdd2 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission_types.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission_types.h @@ -22,6 +22,16 @@ struct guc_virtual_engine { /* * Object which encapsulates the globally operated on i915_sched_engine + * the GuC submission state machine described in intel_guc_submission.c. + * + * Currently we have two instances of these per GuC. One for single-lrc and one + * for multi-lrc submission. We split these into two submission engines as they + * can operate in parallel, allowing a blocking condition on one not to affect + * the other. i.e. guc_ids are statically allocated between these two submission + * modes. One mode may have guc_ids exhausted which requires blocking while the + * other has plenty of guc_ids and can make forward progress. + * + * In the future if different submission use cases arise we can simply + * instantiate another of these objects and assign it to the context.
*/ struct guc_submit_engine { struct i915_sched_engine sched_engine; From patchwork Tue Jul 20 20:57:41 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389353 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6F98EC636C9 for ; Tue, 20 Jul 2021 20:41:09 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 4171C60FEA for ; Tue, 20 Jul 2021 20:41:09 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 4171C60FEA Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 52BEA6E536; Tue, 20 Jul 2021 20:40:28 +0000 (UTC) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by gabe.freedesktop.org (Postfix) with ESMTPS id E2A5E6E519; Tue, 20 Jul 2021 20:40:17 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="296885373" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="296885373" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:17 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906080" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 21/42] drm/i915/guc: Add hang check to GuC submit engine Date: Tue, 20 Jul 2021 13:57:41 -0700 Message-Id: <20210720205802.39610-22-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" The heartbeat uses a single instance of a GuC submit engine (GSE) to do the hang check. As such if a different GSE's state machine hangs, the heartbeat cannot detect this hang. Add timer to each GSE which in turn can disable all submissions if it is hung. 
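For illustration, the watchdog this patch describes reduces to the shape below: arm an hrtimer only while the submission tasklet is blocked waiting on a G2H response, cancel it as soon as the response arrives, and on expiry disable submission so the heartbeat escalates to a full GPU reset. This is only a sketch; the struct and function names (gse_watchdog, watchdog_arm, watchdog_cancel, watchdog_expired) are illustrative and are not the names used by the patch, which embeds the hrtimer directly in struct guc_submit_engine and calls disable_submission().

/*
 * Illustrative sketch only, not the patch's code. Assumes a container with
 * an hrtimer plus a callback that disables submission for the whole GSE.
 */
#include <linux/hrtimer.h>
#include <linux/ktime.h>

#define WATCHDOG_TIMEOUT_NS	(2ULL * NSEC_PER_SEC)	/* mirrors the 2 s G2H wait */

struct gse_watchdog {
	struct hrtimer timer;
	void (*disable_submission)(struct gse_watchdog *wd);
};

static enum hrtimer_restart watchdog_expired(struct hrtimer *timer)
{
	struct gse_watchdog *wd = container_of(timer, struct gse_watchdog, timer);

	/* No G2H within the timeout: stop submitting, let the heartbeat reset. */
	wd->disable_submission(wd);

	return HRTIMER_NORESTART;
}

static void watchdog_init(struct gse_watchdog *wd)
{
	hrtimer_init(&wd->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
	wd->timer.function = watchdog_expired;
}

/* Arm when the tasklet blocks waiting on a G2H response. */
static void watchdog_arm(struct gse_watchdog *wd)
{
	hrtimer_start_range_ns(&wd->timer, ns_to_ktime(WATCHDOG_TIMEOUT_NS),
			       0, HRTIMER_MODE_REL_PINNED);
}

/* Cancel as soon as the expected G2H unblocks the tasklet. */
static void watchdog_cancel(struct gse_watchdog *wd)
{
	hrtimer_cancel(&wd->timer);
}

Arming the timer only across the blocked window keeps the check cheap and avoids false positives while the tasklet is making forward progress.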
Cc: John Harrison Signed-off-by: Matthew Brost --- .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 36 +++++++++++++++++++ .../i915/gt/uc/intel_guc_submission_types.h | 3 ++ 2 files changed, 39 insertions(+) diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index d8be5a41d0ca..4cf233d39bea 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -105,15 +105,21 @@ static bool tasklet_blocked(struct guc_submit_engine *gse) return test_bit(GSE_STATE_TASKLET_BLOCKED, &gse->flags); } +/* 2 seconds seems like a reasonable timeout waiting for a G2H */ +#define MAX_TASKLET_BLOCKED_NS 2000000000 static void set_tasklet_blocked(struct guc_submit_engine *gse) { lockdep_assert_held(&gse->sched_engine.lock); + hrtimer_start_range_ns(&gse->hang_timer, + ns_to_ktime(MAX_TASKLET_BLOCKED_NS), 0, + HRTIMER_MODE_REL_PINNED); set_bit(GSE_STATE_TASKLET_BLOCKED, &gse->flags); } static void __clr_tasklet_blocked(struct guc_submit_engine *gse) { lockdep_assert_held(&gse->sched_engine.lock); + hrtimer_cancel(&gse->hang_timer); clear_bit(GSE_STATE_TASKLET_BLOCKED, &gse->flags); } @@ -1021,6 +1027,7 @@ static void disable_submission(struct intel_guc *guc) if (__tasklet_is_enabled(&sched_engine->tasklet)) { GEM_BUG_ON(!guc->ct.enabled); __tasklet_disable_sync_once(&sched_engine->tasklet); + hrtimer_try_to_cancel(&guc->gse[i]->hang_timer); sched_engine->tasklet.callback = NULL; } } @@ -3716,6 +3723,33 @@ static void guc_sched_engine_destroy(struct kref *kref) kfree(gse); } +static enum hrtimer_restart gse_hang(struct hrtimer *hrtimer) +{ + struct guc_submit_engine *gse = + container_of(hrtimer, struct guc_submit_engine, hang_timer); + struct intel_guc *guc = gse->sched_engine.private_data; + +#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST) + if (guc->gse_hang_expected) + drm_dbg(&guc_to_gt(guc)->i915->drm, + "GSE[%i] hung, disabling submission", gse->id); + else + drm_err(&guc_to_gt(guc)->i915->drm, + "GSE[%i] hung, disabling submission", gse->id); +#else + drm_err(&guc_to_gt(guc)->i915->drm, + "GSE[%i] hung, disabling submission", gse->id); +#endif + + /* + * Tasklet not making forward progress, disable submission which in turn + * will kick in the heartbeat to do a full GPU reset. 
+ */ + disable_submission(guc); + + return HRTIMER_NORESTART; +} + static void guc_submit_engine_init(struct intel_guc *guc, struct guc_submit_engine *gse, int id) @@ -3733,6 +3767,8 @@ static void guc_submit_engine_init(struct intel_guc *guc, sched_engine->retire_inflight_request_prio = guc_retire_inflight_request_prio; sched_engine->private_data = guc; + hrtimer_init(&gse->hang_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); + gse->hang_timer.function = gse_hang; gse->id = id; } diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission_types.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission_types.h index a5933e07bdd2..eae2e9725ede 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission_types.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission_types.h @@ -6,6 +6,8 @@ #ifndef _INTEL_GUC_SUBMISSION_TYPES_H_ #define _INTEL_GUC_SUBMISSION_TYPES_H_ +#include + #include "gt/intel_engine_types.h" #include "gt/intel_context_types.h" #include "i915_scheduler_types.h" @@ -41,6 +43,7 @@ struct guc_submit_engine { unsigned long flags; int total_num_rq_with_no_guc_id; atomic_t num_guc_ids_not_ready; + struct hrtimer hang_timer; int id; /* From patchwork Tue Jul 20 20:57:42 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389371 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id BE94BC636C8 for ; Tue, 20 Jul 2021 20:41:31 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 8EDD760FF3 for ; Tue, 20 Jul 2021 20:41:31 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 8EDD760FF3 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id DFC0E6E5B0; Tue, 20 Jul 2021 20:40:48 +0000 (UTC) Received: from mga12.intel.com (mga12.intel.com [192.55.52.136]) by gabe.freedesktop.org (Postfix) with ESMTPS id 644A96E528; Tue, 20 Jul 2021 20:40:18 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="190909012" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="190909012" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:17 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906081" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 22/42] drm/i915/guc: Add guc_child_context_destroy Date: Tue, 20 Jul 2021 13:57:42 -0700 Message-Id: <20210720205802.39610-23-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: 
<20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Since child contexts do not own the guc_ids or GuC context registration, child contexts can simply be freed on destroy. Add guc_child_context_destroy context operation to do this. Signed-off-by: Matthew Brost --- drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index 4cf233d39bea..429473b4d46c 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -2800,6 +2800,13 @@ static void destroy_worker_func(struct work_struct *w) intel_gt_pm_unpark_work_add(gt, destroy_worker); } +/* Future patches will use this function */ +__attribute__ ((unused)) +static void guc_child_context_destroy(struct kref *kref) +{ + __guc_context_destroy(container_of(kref, struct intel_context, ref)); +} + static void guc_context_destroy(struct kref *kref) { struct intel_context *ce = container_of(kref, typeof(*ce), ref); From patchwork Tue Jul 20 20:57:43 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389365 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 53620C07E95 for ; Tue, 20 Jul 2021 20:41:23 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 1028160FF3 for ; Tue, 20 Jul 2021 20:41:23 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 1028160FF3 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 4F5EB6E595; Tue, 20 Jul 2021 20:40:36 +0000 (UTC) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by gabe.freedesktop.org (Postfix) with ESMTPS id CEAC76E517; Tue, 20 Jul 2021 20:40:17 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="209421886" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="209421886" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:17 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906082" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 
Jul 2021 13:40:16 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 23/42] drm/i915/guc: Implement multi-lrc submission Date: Tue, 20 Jul 2021 13:57:43 -0700 Message-Id: <20210720205802.39610-24-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Implement multi-lrc submission via a single workqueue entry and single H2G. The workqueue entry contains an updated tail value for each request, of all the contexts in the multi-lrc submission, and updates these values simultaneously. As such, the tasklet and bypass path have been updated to coalesce requests into a single submission. Signed-off-by: Matthew Brost --- drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h | 6 +- .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 224 ++++++++++++++++-- drivers/gpu/drm/i915/i915_request.h | 8 + 3 files changed, 222 insertions(+), 16 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h index 86fb90a4bcfa..820cb6b5d2d0 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h @@ -64,12 +64,14 @@ #define WQ_TYPE_PSEUDO (0x2 << WQ_TYPE_SHIFT) #define WQ_TYPE_INORDER (0x3 << WQ_TYPE_SHIFT) #define WQ_TYPE_NOOP (0x4 << WQ_TYPE_SHIFT) -#define WQ_TARGET_SHIFT 10 +#define WQ_TYPE_MULTI_LRC (0x5 << WQ_TYPE_SHIFT) +#define WQ_TARGET_SHIFT 8 #define WQ_LEN_SHIFT 16 #define WQ_NO_WCFLUSH_WAIT (1 << 27) #define WQ_PRESENT_WORKLOAD (1 << 28) -#define WQ_RING_TAIL_SHIFT 20 +#define WQ_GUC_ID_SHIFT 0 +#define WQ_RING_TAIL_SHIFT 18 #define WQ_RING_TAIL_MAX 0x7FF /* 2^11 QWords */ #define WQ_RING_TAIL_MASK (WQ_RING_TAIL_MAX << WQ_RING_TAIL_SHIFT) diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index 429473b4d46c..29a7616d3bcf 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -490,6 +490,29 @@ __get_process_desc(struct intel_context *ce) LRC_STATE_OFFSET) / sizeof(u32))); } +static inline u32 *get_wq_pointer(struct guc_process_desc *desc, + struct intel_context *ce, + u32 wqi_size) +{ + /* + * Check for space in work queue. Caching a value of head pointer in + * intel_context structure in order reduce the number accesses to shared + * GPU memory which may be across a PCIe bus. 
+ */ +#define AVAILABLE_SPACE \ + CIRC_SPACE(ce->guc_wqi_tail, ce->guc_wqi_head, GUC_WQ_SIZE) + if (AVAILABLE_SPACE < wqi_size) { + ce->guc_wqi_head = READ_ONCE(desc->head); + + if (AVAILABLE_SPACE < wqi_size) + return NULL; + } +#undef AVAILABLE_SPACE + + return ((u32 *)__get_process_desc(ce)) + + ((WQ_OFFSET + ce->guc_wqi_tail) / sizeof(u32)); +} + static u32 __get_lrc_desc_offset(struct intel_guc *guc, int index) { GEM_BUG_ON(index >= guc->lrcd_reg.max_idx); @@ -640,7 +663,7 @@ static inline bool request_has_no_guc_id(struct i915_request *rq) static int __guc_add_request(struct intel_guc *guc, struct i915_request *rq) { int err = 0; - struct intel_context *ce = rq->context; + struct intel_context *ce = request_to_scheduling_context(rq); u32 action[3]; int len = 0; u32 g2h_len_dw = 0; @@ -690,6 +713,18 @@ static int __guc_add_request(struct intel_guc *guc, struct i915_request *rq) trace_intel_context_sched_enable(ce); atomic_inc(&guc->outstanding_submission_g2h); set_context_enabled(ce); + + /* + * Without multi-lrc, the KMD does the submission step (moving the + * lrc tail) so enabling scheduling is sufficient to submit the + * context. This isn't the case in multi-lrc submission as the + * GuC needs to move the tails, hence the need for another H2G + * to submit a multi-lrc context after enabling scheduling. + */ + if (intel_context_is_parent(ce)) { + action[0] = INTEL_GUC_ACTION_SCHED_CONTEXT; + err = intel_guc_send_nb(guc, action, len - 1, 0); + } } else if (!enabled) { clr_context_pending_enable(ce); intel_context_put(ce); @@ -764,7 +799,6 @@ static int tasklet_register_context(struct guc_submit_engine *gse, return ret; } - static inline void guc_set_lrc_tail(struct i915_request *rq) { rq->context->lrc_reg_state[CTX_RING_TAIL] = @@ -776,6 +810,131 @@ static inline int rq_prio(const struct i915_request *rq) return rq->sched.attr.priority; } +static inline bool is_multi_lrc_rq(struct i915_request *rq) +{ + return intel_context_is_child(rq->context) || + intel_context_is_parent(rq->context); +} + +/* + * Multi-lrc requests are not submitted to the GuC until all requests in + * the set are ready. With the exception of the last request in the set, + * submitting a multi-lrc request is therefore just a status update on + * the driver-side and can be safely merged with other requests. When the + * last multi-lrc request in a set is detected, we break out of the + * submission loop and submit the whole set, thus we never attempt to + * merge that one with other requests.
+ */ +static inline bool can_merge_rq(struct i915_request *rq, + struct i915_request *last) +{ + return is_multi_lrc_rq(last) || rq->context == last->context; +} + +static inline u32 wq_space_until_wrap(struct intel_context *ce) +{ + return (GUC_WQ_SIZE - ce->guc_wqi_tail); +} + +static inline void write_wqi(struct guc_process_desc *desc, + struct intel_context *ce, + u32 wqi_size) +{ + ce->guc_wqi_tail = (ce->guc_wqi_tail + wqi_size) & (GUC_WQ_SIZE - 1); + WRITE_ONCE(desc->tail, ce->guc_wqi_tail); +} + +static inline int guc_wq_noop_append(struct intel_context *ce) +{ + struct guc_process_desc *desc = __get_process_desc(ce); + u32 *wqi = get_wq_pointer(desc, ce, wq_space_until_wrap(ce)); + + if (!wqi) + return -EBUSY; + + *wqi = WQ_TYPE_NOOP | + ((wq_space_until_wrap(ce) / sizeof(u32) - 1) << WQ_LEN_SHIFT); + ce->guc_wqi_tail = 0; + + return 0; +} + +static int __guc_wq_item_append(struct i915_request *rq) +{ + struct intel_context *ce = request_to_scheduling_context(rq); + struct intel_context *child; + struct guc_process_desc *desc = __get_process_desc(ce); + unsigned int wqi_size = (ce->guc_number_children + 4) * + sizeof(u32); + u32 *wqi; + int ret; + + /* Ensure context is in correct state updating work queue */ + GEM_BUG_ON(ce->guc_num_rq_submit_no_id); + GEM_BUG_ON(request_has_no_guc_id(rq)); + GEM_BUG_ON(!atomic_read(&ce->guc_id_ref)); + GEM_BUG_ON(context_guc_id_invalid(ce)); + GEM_BUG_ON(context_pending_disable(ce)); + GEM_BUG_ON(context_wait_for_deregister_to_register(ce)); + + /* Insert NOOP if this work queue item will wrap the tail pointer. */ + if (wqi_size > wq_space_until_wrap(ce)) { + ret = guc_wq_noop_append(ce); + if (ret) + return ret; + } + + wqi = get_wq_pointer(desc, ce, wqi_size); + if (!wqi) + return -EBUSY; + + *wqi++ = WQ_TYPE_MULTI_LRC | + ((wqi_size / sizeof(u32) - 1) << WQ_LEN_SHIFT); + *wqi++ = ce->lrc.lrca; + *wqi++ = (ce->guc_id << WQ_GUC_ID_SHIFT) | + ((ce->ring->tail / sizeof(u64)) << WQ_RING_TAIL_SHIFT); + *wqi++ = 0; /* fence_id */ + for_each_child(ce, child) + *wqi++ = child->ring->tail / sizeof(u64); + + write_wqi(desc, ce, wqi_size); + + return 0; +} + +static int gse_wq_item_append(struct guc_submit_engine *gse, + struct i915_request *rq) +{ + struct intel_context *ce = request_to_scheduling_context(rq); + int ret = 0; + + if (likely(!intel_context_is_banned(ce))) { + ret = __guc_wq_item_append(rq); + + if (unlikely(ret == -EBUSY)) { + gse->stalled_rq = rq; + gse->submission_stall_reason = STALL_MOVE_LRC_TAIL; + } + } + + return ret; +} + +static inline bool multi_lrc_submit(struct i915_request *rq) +{ + struct intel_context *ce = request_to_scheduling_context(rq); + intel_ring_set_tail(rq->ring, rq->tail); + + /* + * We expect the front end (execbuf IOCTL) to set this flag on the last + * request generated from a multi-BB submission. This indicates to the + * backend (GuC interface) that we should submit this context thus + * submitting all the requests generated in parallel. 
+ */ + return test_bit(I915_FENCE_FLAG_SUBMIT_PARALLEL, &rq->fence.flags) || + intel_context_is_banned(ce); +} + static void kick_retire_wq(struct guc_submit_engine *gse) { queue_work(system_unbound_wq, &gse->retire_worker); @@ -819,7 +978,7 @@ static int gse_dequeue_one_context(struct guc_submit_engine *gse) struct i915_request *rq, *rn; priolist_for_each_request_consume(rq, rn, p) { - if (last && rq->context != last->context) + if (last && !can_merge_rq(rq, last)) goto done; list_del_init(&rq->sched.link); @@ -828,7 +987,22 @@ static int gse_dequeue_one_context(struct guc_submit_engine *gse) trace_i915_request_in(rq, 0); last = rq; - submit = true; + + if (is_multi_lrc_rq(rq)) { + /* + * We need to coalesce all multi-lrc + * requests in a relationship into a + * single H2G. We are guaranteed that + * all of these requests will be + * submitted sequentially. + */ + if (multi_lrc_submit(rq)) { + submit = true; + goto done; + } + } else { + submit = true; + } } rb_erase_cached(&p->node, &sched_engine->queue); @@ -838,7 +1012,7 @@ static int gse_dequeue_one_context(struct guc_submit_engine *gse) done: if (submit) { - struct intel_context *ce = last->context; + struct intel_context *ce = request_to_scheduling_context(last); if (ce->guc_num_rq_submit_no_id) { ret = tasklet_pin_guc_id(gse, last); @@ -860,7 +1034,17 @@ static int gse_dequeue_one_context(struct guc_submit_engine *gse) } move_lrc_tail: - guc_set_lrc_tail(last); + if (is_multi_lrc_rq(last)) { + ret = gse_wq_item_append(gse, last); + if (ret == -EBUSY) + goto schedule_tasklet; + else if (ret != 0) { + GEM_WARN_ON(ret); /* Unexpected */ + goto deadlk; + } + } else { + guc_set_lrc_tail(last); + } add_request: ret = gse_add_request(gse, last); @@ -1565,14 +1749,22 @@ static bool need_tasklet(struct guc_submit_engine *gse, struct intel_context *ce static int gse_bypass_tasklet_submit(struct guc_submit_engine *gse, struct i915_request *rq) { - int ret; + int ret = 0; __i915_request_submit(rq); trace_i915_request_in(rq, 0); - guc_set_lrc_tail(rq); - ret = gse_add_request(gse, rq); + if (is_multi_lrc_rq(rq)) { + if (multi_lrc_submit(rq)) { + ret = gse_wq_item_append(gse, rq); + if (!ret) + ret = gse_add_request(gse, rq); + } + } else { + guc_set_lrc_tail(rq); + ret = gse_add_request(gse, rq); + } if (unlikely(ret == -EPIPE)) disable_submission(gse->sched_engine.private_data); @@ -1589,7 +1781,7 @@ static void guc_submit_request(struct i915_request *rq) /* Will be called from irq-context when using foreign fences. 
*/ spin_lock_irqsave(&sched_engine->lock, flags); - if (need_tasklet(gse, rq->context)) + if (need_tasklet(gse, request_to_scheduling_context(rq))) queue_request(sched_engine, rq, rq_prio(rq)); else if (gse_bypass_tasklet_submit(gse, rq) == -EBUSY) kick_tasklet(gse); @@ -2957,9 +3149,10 @@ static inline bool new_guc_prio_higher(u8 old_guc_prio, u8 new_guc_prio) static void add_to_context(struct i915_request *rq) { - struct intel_context *ce = rq->context; + struct intel_context *ce = request_to_scheduling_context(rq); u8 new_guc_prio = map_i915_prio_to_guc_prio(rq_prio(rq)); + GEM_BUG_ON(intel_context_is_child(ce)); GEM_BUG_ON(rq->guc_prio == GUC_PRIO_FINI); spin_lock(&ce->guc_active.lock); @@ -2993,7 +3186,9 @@ static void guc_prio_fini(struct i915_request *rq, struct intel_context *ce) static void remove_from_context(struct i915_request *rq) { - struct intel_context *ce = rq->context; + struct intel_context *ce = request_to_scheduling_context(rq); + + GEM_BUG_ON(intel_context_is_child(ce)); spin_lock_irq(&ce->guc_active.lock); @@ -3197,7 +3392,8 @@ static int tasklet_pin_guc_id(struct guc_submit_engine *gse, GEM_BUG_ON(gse->total_num_rq_with_no_guc_id < 0); list_for_each_entry_reverse(rq, &ce->guc_active.requests, sched.link) - if (request_has_no_guc_id(rq)) { + if (request_has_no_guc_id(rq) && + request_to_scheduling_context(rq) == ce) { --ce->guc_num_rq_submit_no_id; clear_bit(I915_FENCE_FLAG_GUC_ID_NOT_PINNED, &rq->fence.flags); @@ -3517,7 +3713,7 @@ static void guc_bump_inflight_request_prio(struct i915_request *rq, static void guc_retire_inflight_request_prio(struct i915_request *rq) { - struct intel_context *ce = rq->context; + struct intel_context *ce = request_to_scheduling_context(rq); spin_lock(&ce->guc_active.lock); guc_prio_fini(rq, ce); diff --git a/drivers/gpu/drm/i915/i915_request.h b/drivers/gpu/drm/i915/i915_request.h index 5f304fd02071..ad3ec638d28b 100644 --- a/drivers/gpu/drm/i915/i915_request.h +++ b/drivers/gpu/drm/i915/i915_request.h @@ -145,6 +145,14 @@ enum { * tasklet that the guc_id isn't pinned. */ I915_FENCE_FLAG_GUC_ID_NOT_PINNED, + + /* + * I915_FENCE_FLAG_SUBMIT_PARALLEL - request with a context in a + * parent-child relationship (parallel submission, multi-lrc) should + * trigger a submission to the GuC rather than just moving the context + * tail. 
+ */ + I915_FENCE_FLAG_SUBMIT_PARALLEL, }; /** From patchwork Tue Jul 20 20:57:44 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389361 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6D14BC6377A for ; Tue, 20 Jul 2021 20:41:20 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 38BDC60FEA for ; Tue, 20 Jul 2021 20:41:20 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 38BDC60FEA Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id E34956E58E; Tue, 20 Jul 2021 20:40:35 +0000 (UTC) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by gabe.freedesktop.org (Postfix) with ESMTPS id F41C76E51A; Tue, 20 Jul 2021 20:40:17 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="296885374" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="296885374" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:17 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906083" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 24/42] drm/i915/guc: Insert submit fences between requests in parent-child relationship Date: Tue, 20 Jul 2021 13:57:44 -0700 Message-Id: <20210720205802.39610-25-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" The GuC must receive requests in the order submitted for contexts in a parent-child relationship to function correctly. To ensure this, insert a submit fence between the current request and last request submitted for requests / contexts in a parent child relationship. This is conceptually similar to a single timeline. 
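To make the ordering rule concrete: the parent context remembers the last request submitted for the whole parent-child set, and every new request in the set awaits that request's submit fence before it can itself be submitted. The sketch below is a simplified reading of that rule, reusing helpers introduced in this series; it deliberately omits the scheduler dependency and the timeline last_request bookkeeping that the real __i915_request_add_to_timeline() path also performs, and the helper name is illustrative.

/*
 * Simplified sketch only, not a drop-in replacement for the patch's code.
 */
static void ensure_parallel_ordering(struct i915_request *rq)
{
	struct intel_context *parent = intel_context_to_parent(rq->context);
	struct i915_request *prev = parent->last_rq;

	if (prev) {
		/*
		 * Chain this request's submit fence to the previous request
		 * in the parent-child set so the GuC receives the requests
		 * in execbuf order, much like a single timeline.
		 */
		if (!__i915_request_is_complete(prev))
			i915_sw_fence_await_sw_fence(&rq->submit, &prev->submit,
						     &rq->submitq);
		i915_request_put(prev);
	}

	/* The parent keeps a reference to the last request handed out. */
	parent->last_rq = i915_request_get(rq);
}

This is what makes the set behave like a single timeline from the GuC's point of view even though each child context retains its own ring and timeline.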
Signed-off-by: Matthew Brost Cc: John Harrison --- drivers/gpu/drm/i915/gt/intel_context.c | 2 + drivers/gpu/drm/i915/gt/intel_context.h | 5 + drivers/gpu/drm/i915/gt/intel_context_types.h | 3 + .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 3 +- drivers/gpu/drm/i915/i915_request.c | 119 ++++++++++++++---- 5 files changed, 105 insertions(+), 27 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c index 545bedd43509..60dd77d5f31d 100644 --- a/drivers/gpu/drm/i915/gt/intel_context.c +++ b/drivers/gpu/drm/i915/gt/intel_context.c @@ -486,6 +486,8 @@ void intel_context_fini(struct intel_context *ce) { struct intel_context *child, *next; + if (ce->last_rq) + i915_request_put(ce->last_rq); if (ce->timeline) intel_timeline_put(ce->timeline); i915_vm_put(ce->vm); diff --git a/drivers/gpu/drm/i915/gt/intel_context.h b/drivers/gpu/drm/i915/gt/intel_context.h index e18f16de1158..0c0a276aedb6 100644 --- a/drivers/gpu/drm/i915/gt/intel_context.h +++ b/drivers/gpu/drm/i915/gt/intel_context.h @@ -57,6 +57,11 @@ intel_context_to_parent(struct intel_context *ce) return intel_context_is_child(ce) ? ce->parent : ce; } +static inline bool intel_context_is_parallel(struct intel_context *ce) +{ + return intel_context_is_child(ce) || intel_context_is_parent(ce); +} + void intel_context_bind_parent_child(struct intel_context *parent, struct intel_context *child); diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h index 615b8f1d723d..5f165851e0df 100644 --- a/drivers/gpu/drm/i915/gt/intel_context_types.h +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h @@ -225,6 +225,9 @@ struct intel_context { */ u8 guc_prio; u32 guc_prio_count[GUC_CLIENT_PRIORITY_NUM]; + + /* Last request submitted on a parent */ + struct i915_request *last_rq; }; #endif /* __INTEL_CONTEXT_TYPES__ */ diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index 29a7616d3bcf..a7a9817a86bc 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -812,8 +812,7 @@ static inline int rq_prio(const struct i915_request *rq) static inline bool is_multi_lrc_rq(struct i915_request *rq) { - return intel_context_is_child(rq->context) || - intel_context_is_parent(rq->context); + return intel_context_is_parallel(rq->context); } /* diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c index 329c61595f02..b998ed6723b9 100644 --- a/drivers/gpu/drm/i915/i915_request.c +++ b/drivers/gpu/drm/i915/i915_request.c @@ -1553,36 +1553,62 @@ i915_request_await_object(struct i915_request *to, return ret; } +static inline bool is_parallel_rq(struct i915_request *rq) +{ + return intel_context_is_parallel(rq->context); +} + +static inline struct intel_context *request_to_parent(struct i915_request *rq) +{ + return intel_context_to_parent(rq->context); +} + + static struct i915_request * +__i915_request_ensure_parallel_ordering(struct i915_request *rq, + struct intel_timeline *timeline) +{ + struct i915_request *prev; + + GEM_BUG_ON(!is_parallel_rq(rq)); + + prev = request_to_parent(rq)->last_rq; + if (prev) { + if (!__i915_request_is_complete(prev)) { + i915_sw_fence_await_sw_fence(&rq->submit, + &prev->submit, + &rq->submitq); + + if (rq->engine->sched_engine->schedule) + __i915_sched_node_add_dependency(&rq->sched, + &prev->sched, + &rq->dep, + 0); + } + i915_request_put(prev); + } + + 
request_to_parent(rq)->last_rq = i915_request_get(rq); + + return to_request(__i915_active_fence_set(&timeline->last_request, + &rq->fence)); +} + static struct i915_request * -__i915_request_add_to_timeline(struct i915_request *rq) +__i915_request_ensure_ordering(struct i915_request *rq, + struct intel_timeline *timeline) { - struct intel_timeline *timeline = i915_request_timeline(rq); struct i915_request *prev; - /* - * Dependency tracking and request ordering along the timeline - * is special cased so that we can eliminate redundant ordering - * operations while building the request (we know that the timeline - * itself is ordered, and here we guarantee it). - * - * As we know we will need to emit tracking along the timeline, - * we embed the hooks into our request struct -- at the cost of - * having to have specialised no-allocation interfaces (which will - * be beneficial elsewhere). - * - * A second benefit to open-coding i915_request_await_request is - * that we can apply a slight variant of the rules specialised - * for timelines that jump between engines (such as virtual engines). - * If we consider the case of virtual engine, we must emit a dma-fence - * to prevent scheduling of the second request until the first is - * complete (to maximise our greedy late load balancing) and this - * precludes optimising to use semaphores serialisation of a single - * timeline across engines. - */ + GEM_BUG_ON(is_parallel_rq(rq)); + prev = to_request(__i915_active_fence_set(&timeline->last_request, &rq->fence)); + if (prev && !__i915_request_is_complete(prev)) { bool uses_guc = intel_engine_uses_guc(rq->engine); + bool pow2 = is_power_of_2(READ_ONCE(prev->engine)->mask | + rq->engine->mask); + bool same_context = prev->context == rq->context; /* * The requests are supposed to be kept in order. However, @@ -1590,12 +1616,11 @@ __i915_request_add_to_timeline(struct i915_request *rq) * is used as a barrier for external modification to this * context. */ - GEM_BUG_ON(prev->context == rq->context && + GEM_BUG_ON(same_context && i915_seqno_passed(prev->fence.seqno, rq->fence.seqno)); - if ((!uses_guc && is_power_of_2(READ_ONCE(prev->engine)->mask | rq->engine->mask)) || - (uses_guc && prev->context == rq->context)) + if ((same_context && uses_guc) || (!uses_guc && pow2)) i915_sw_fence_await_sw_fence(&rq->submit, &prev->submit, &rq->submitq); @@ -1610,6 +1635,50 @@ __i915_request_add_to_timeline(struct i915_request *rq) 0); } + return prev; +} + +static struct i915_request * +__i915_request_add_to_timeline(struct i915_request *rq) +{ + struct intel_timeline *timeline = i915_request_timeline(rq); + struct i915_request *prev; + + /* + * Dependency tracking and request ordering along the timeline + * is special cased so that we can eliminate redundant ordering + * operations while building the request (we know that the timeline + * itself is ordered, and here we guarantee it). + * + * As we know we will need to emit tracking along the timeline, + * we embed the hooks into our request struct -- at the cost of + * having to have specialised no-allocation interfaces (which will + * be beneficial elsewhere). + * + * A second benefit to open-coding i915_request_await_request is + * that we can apply a slight variant of the rules specialised + * for timelines that jump between engines (such as virtual engines). 
+ * If we consider the case of virtual engine, we must emit a dma-fence + * to prevent scheduling of the second request until the first is + * complete (to maximise our greedy late load balancing) and this + * precludes optimising to use semaphores serialisation of a single + * timeline across engines. + * + * We do not order parallel submission requests on the timeline as each + * parallel submission context has its own timeline and the ordering + * rules for parallel requests are that they must be submitted in the + * order received from the execbuf IOCTL. So rather than using the + * timeline we store a pointer to last request submitted in the + * relationship in the gem context and insert a submission fence + * between that request and request passed into this function or + * alternatively we use completion fence if gem context has a single + * timeline and this is the first submission of an execbuf IOCTL. + */ + if (likely(!is_parallel_rq(rq))) + prev = __i915_request_ensure_ordering(rq, timeline); + else + prev = __i915_request_ensure_parallel_ordering(rq, timeline); + /* * Make sure that no request gazumped us - if it was allocated after * our i915_request_alloc() and called __i915_request_add() before From patchwork Tue Jul 20 20:57:45 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389395 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8F819C636C9 for ; Tue, 20 Jul 2021 20:41:50 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 6141B60FEA for ; Tue, 20 Jul 2021 20:41:50 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 6141B60FEA Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 7C17F6E5CC; Tue, 20 Jul 2021 20:41:02 +0000 (UTC) Received: from mga12.intel.com (mga12.intel.com [192.55.52.136]) by gabe.freedesktop.org (Postfix) with ESMTPS id 8BE036E52D; Tue, 20 Jul 2021 20:40:18 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="190909013" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="190909013" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:17 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906084" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 25/42] drm/i915/guc: Implement multi-lrc reset Date: Tue, 20 Jul 2021 13:57:45 -0700 Message-Id: <20210720205802.39610-26-matthew.brost@intel.com> 
X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Update context and full GPU reset to work with multi-lrc. The idea is parent context tracks all the active requests inflight for itself and its' children. The parent context owns the reset replaying / canceling requests as needed. Signed-off-by: Matthew Brost --- drivers/gpu/drm/i915/gt/intel_context.c | 12 +++-- .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 54 ++++++++++++++----- 2 files changed, 47 insertions(+), 19 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c index 60dd77d5f31d..7395894724f8 100644 --- a/drivers/gpu/drm/i915/gt/intel_context.c +++ b/drivers/gpu/drm/i915/gt/intel_context.c @@ -605,20 +605,22 @@ struct i915_request *intel_context_create_request(struct intel_context *ce) struct i915_request *intel_context_find_active_request(struct intel_context *ce) { + struct intel_context *parent = intel_context_is_child(ce) ? + ce->parent : ce; struct i915_request *rq, *active = NULL; unsigned long flags; GEM_BUG_ON(!intel_engine_uses_guc(ce->engine)); - spin_lock_irqsave(&ce->guc_active.lock, flags); - list_for_each_entry_reverse(rq, &ce->guc_active.requests, + spin_lock_irqsave(&parent->guc_active.lock, flags); + list_for_each_entry_reverse(rq, &parent->guc_active.requests, sched.link) { - if (i915_request_completed(rq)) + if (i915_request_completed(rq) && rq->context == ce) break; - active = rq; + active = (rq->context == ce) ? 
rq : active; } - spin_unlock_irqrestore(&ce->guc_active.lock, flags); + spin_unlock_irqrestore(&parent->guc_active.lock, flags); return active; } diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index a7a9817a86bc..8db1d56e34e2 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -810,6 +810,11 @@ static inline int rq_prio(const struct i915_request *rq) return rq->sched.attr.priority; } +static inline bool is_multi_lrc(struct intel_context *ce) +{ + return intel_context_is_parallel(ce); +} + static inline bool is_multi_lrc_rq(struct i915_request *rq) { return intel_context_is_parallel(rq->context); @@ -1371,6 +1376,8 @@ __unwind_incomplete_requests(struct intel_context *ce) ce->engine->sched_engine; unsigned long flags; + GEM_BUG_ON(intel_context_is_child(ce)); + spin_lock_irqsave(&sched_engine->lock, flags); spin_lock(&ce->guc_active.lock); list_for_each_entry_safe(rq, rn, @@ -1403,8 +1410,12 @@ __unwind_incomplete_requests(struct intel_context *ce) static void __guc_reset_context(struct intel_context *ce, bool stalled) { + bool local_stalled; struct i915_request *rq; u32 head; + int i, number_children = ce->guc_number_children; + + GEM_BUG_ON(intel_context_is_child(ce)); intel_context_get(ce); @@ -1416,22 +1427,32 @@ static void __guc_reset_context(struct intel_context *ce, bool stalled) */ clr_context_enabled(ce); - rq = intel_context_find_active_request(ce); - if (!rq) { - head = ce->ring->tail; - stalled = false; - goto out_replay; - } + for (i = 0; i < number_children + 1; ++i) { + if (!intel_context_is_pinned(ce)) + goto next_context; + + local_stalled = false; + rq = intel_context_find_active_request(ce); + if (!rq) { + head = ce->ring->tail; + goto out_replay; + } - if (!i915_request_started(rq)) - stalled = false; + GEM_BUG_ON(i915_active_is_idle(&ce->active)); + head = intel_ring_wrap(ce->ring, rq->head); - GEM_BUG_ON(i915_active_is_idle(&ce->active)); - head = intel_ring_wrap(ce->ring, rq->head); - __i915_request_reset(rq, stalled); + if (i915_request_started(rq)) + local_stalled = true; + __i915_request_reset(rq, local_stalled && stalled); out_replay: - guc_reset_state(ce, head, stalled); + guc_reset_state(ce, head, local_stalled && stalled); +next_context: + if (i != number_children) + ce = list_next_entry(ce, guc_child_link); + } + ce = intel_context_to_parent(ce); + __unwind_incomplete_requests(ce); ce->guc_num_rq_submit_no_id = 0; intel_context_put(ce); @@ -1447,9 +1468,11 @@ void intel_guc_submission_reset(struct intel_guc *guc, bool stalled) return; xa_for_each(&guc->context_lookup, index, ce) - if (intel_context_is_pinned(ce)) + if (intel_context_is_pinned(ce) && + !intel_context_is_child(ce)) __guc_reset_context(ce, stalled); + /* GuC is blown away, drop all references to contexts */ xa_destroy(&guc->context_lookup); } @@ -1532,7 +1555,8 @@ void intel_guc_submission_cancel_requests(struct intel_guc *guc) int i; xa_for_each(&guc->context_lookup, index, ce) - if (intel_context_is_pinned(ce)) + if (intel_context_is_pinned(ce) && + !intel_context_is_child(ce)) guc_cancel_context_requests(ce); for (i = 0; i < GUC_SUBMIT_ENGINE_MAX; ++i) @@ -2792,6 +2816,8 @@ static void guc_context_ban(struct intel_context *ce, struct i915_request *rq) intel_wakeref_t wakeref; unsigned long flags; + GEM_BUG_ON(intel_context_is_child(ce)); + gse_flush_submissions(ce_to_gse(ce)); spin_lock_irqsave(&ce->guc_state.lock, flags); From patchwork Tue Jul 20 20:57:46 
2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389377 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.9 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,UNWANTED_LANGUAGE_BODY, URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1F2D5C636C8 for ; Tue, 20 Jul 2021 20:41:36 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id E94E860FEA for ; Tue, 20 Jul 2021 20:41:35 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org E94E860FEA Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 2A14C6E5B2; Tue, 20 Jul 2021 20:40:49 +0000 (UTC) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by gabe.freedesktop.org (Postfix) with ESMTPS id 03F886E51D; Tue, 20 Jul 2021 20:40:17 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="209421887" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="209421887" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:17 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906085" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 26/42] drm/i915/guc: Update debugfs for GuC multi-lrc Date: Tue, 20 Jul 2021 13:57:46 -0700 Message-Id: <20210720205802.39610-27-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Display the workqueue status in debugfs for GuC contexts that are in parent-child relationship. 
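For illustration, the per-context debugfs block for a parallel context would then read roughly as below (abridged, with made-up values; the priority lines are omitted). Only the parent entry carries the WQI fields, and each child follows as a plain context block:

  GuC lrc descriptor 4:
          HW Context Desc: 0x00012000
                  LRC Head: Internal 40, Memory 40
                  LRC Tail: Internal 104, Memory 104
                  Context Pin Count: 1
                  GuC ID Ref Count: 1
                  Number Requests Not Ready: 0
                  Schedule State: 0x1, 0x2

                  WQI Head: 0
                  WQI Tail: 64
                  WQI Status: 1

  GuC lrc descriptor 5:
          HW Context Desc: 0x00013000
          ... (same per-context fields repeated for each child)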
Signed-off-by: Matthew Brost --- .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 56 +++++++++++++------ 1 file changed, 39 insertions(+), 17 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index 8db1d56e34e2..81a5439de924 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -4489,31 +4489,53 @@ void intel_guc_submission_print_info(struct intel_guc *guc, gse_log_submission_info(guc->gse[i], p, i); } +static inline void guc_log_context(struct drm_printer *p, + struct intel_context *ce) +{ + drm_printf(p, "GuC lrc descriptor %u:\n", ce->guc_id); + drm_printf(p, "\tHW Context Desc: 0x%08x\n", ce->lrc.lrca); + drm_printf(p, "\t\tLRC Head: Internal %u, Memory %u\n", + ce->ring->head, + ce->lrc_reg_state[CTX_RING_HEAD]); + drm_printf(p, "\t\tLRC Tail: Internal %u, Memory %u\n", + ce->ring->tail, + ce->lrc_reg_state[CTX_RING_TAIL]); + drm_printf(p, "\t\tContext Pin Count: %u\n", + atomic_read(&ce->pin_count)); + drm_printf(p, "\t\tGuC ID Ref Count: %u\n", + atomic_read(&ce->guc_id_ref)); + drm_printf(p, "\t\tNumber Requests Not Ready: %u\n", + atomic_read(&ce->guc_num_rq_not_ready)); + drm_printf(p, "\t\tSchedule State: 0x%x, 0x%x\n\n", + ce->guc_state.sched_state, + atomic_read(&ce->guc_sched_state_no_lock)); +} + void intel_guc_submission_print_context_info(struct intel_guc *guc, struct drm_printer *p) { struct intel_context *ce; unsigned long index; xa_for_each(&guc->context_lookup, index, ce) { - drm_printf(p, "GuC lrc descriptor %u:\n", ce->guc_id); - drm_printf(p, "\tHW Context Desc: 0x%08x\n", ce->lrc.lrca); - drm_printf(p, "\t\tLRC Head: Internal %u, Memory %u\n", - ce->ring->head, - ce->lrc_reg_state[CTX_RING_HEAD]); - drm_printf(p, "\t\tLRC Tail: Internal %u, Memory %u\n", - ce->ring->tail, - ce->lrc_reg_state[CTX_RING_TAIL]); - drm_printf(p, "\t\tContext Pin Count: %u\n", - atomic_read(&ce->pin_count)); - drm_printf(p, "\t\tGuC ID Ref Count: %u\n", - atomic_read(&ce->guc_id_ref)); - drm_printf(p, "\t\tNumber Requests Not Ready: %u\n", - atomic_read(&ce->guc_num_rq_not_ready)); - drm_printf(p, "\t\tSchedule State: 0x%x, 0x%x\n\n", - ce->guc_state.sched_state, - atomic_read(&ce->guc_sched_state_no_lock)); + GEM_BUG_ON(intel_context_is_child(ce)); + guc_log_context(p, ce); guc_log_context_priority(p, ce); + + if (intel_context_is_parent(ce)) { + struct guc_process_desc *desc = __get_process_desc(ce); + struct intel_context *child; + + drm_printf(p, "\t\tWQI Head: %u\n", + READ_ONCE(desc->head)); + drm_printf(p, "\t\tWQI Tail: %u\n", + READ_ONCE(desc->tail)); + drm_printf(p, "\t\tWQI Status: %u\n\n", + READ_ONCE(desc->wq_status)); + + for_each_child(ce, child) + guc_log_context(p, child); + } } } From patchwork Tue Jul 20 20:57:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389363 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 88804C636CE for ; Tue, 20 Jul 2021 20:41:21 +0000 (UTC) Received: from 
gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 5C2E260FF3 for ; Tue, 20 Jul 2021 20:41:21 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 5C2E260FF3 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id A528F6E58A; Tue, 20 Jul 2021 20:40:35 +0000 (UTC) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by gabe.freedesktop.org (Postfix) with ESMTPS id 189D26E520; Tue, 20 Jul 2021 20:40:18 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="296885375" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="296885375" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:17 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906086" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 27/42] drm/i915: Connect UAPI to GuC multi-lrc interface Date: Tue, 20 Jul 2021 13:57:47 -0700 Message-Id: <20210720205802.39610-28-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Introduce 'set parallel submit' extension to connect UAPI to GuC multi-lrc interface. Kernel doc in new uAPI should explain it all. 
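For orientation, a userspace sketch of the intended flow, mirroring Example 1 of the kernel doc added below (width=2, num_siblings=1). The engine instances, the existing ctx_id/fd and the lack of error handling are assumptions, the updated <drm/i915_drm.h> from this patch is required, and note the extension still returns -ENODEV at the top of set_proto_ctx_engines_parallel_submit() in this revision:

  #include <stdint.h>
  #include <xf86drm.h>
  #include <drm/i915_drm.h>

  /* Configure slot 0 for 2-wide parallel submission: two BBs per execbuf,
   * one placement per BB, on two logically contiguous video engines.
   */
  static int set_parallel_slot(int fd, uint32_t ctx_id)
  {
          struct {
                  struct i915_context_engines_parallel_submit ext;
                  struct i915_engine_class_instance engines[2];
          } parallel = {
                  .ext.base.name = I915_CONTEXT_ENGINES_EXT_PARALLEL_SUBMIT,
                  .ext.engine_index = 0,
                  .ext.width = 2,         /* BBs per execbuf */
                  .ext.num_siblings = 1,  /* placements per BB */
                  .engines = {
                          { I915_ENGINE_CLASS_VIDEO, 0 },  /* runs BB 0 */
                          { I915_ENGINE_CLASS_VIDEO, 1 },  /* runs BB 1 */
                  },
          };
          /* One-slot engine map, left invalid so the extension can claim
           * slot 0 (the "set_engines(INVALID)" step in the examples).
           */
          struct {
                  struct i915_context_param_engines base;
                  struct i915_engine_class_instance engines[1];
          } map = {
                  .base.extensions = (uintptr_t)&parallel,
                  .engines = { { I915_ENGINE_CLASS_INVALID,
                                 I915_ENGINE_CLASS_INVALID_NONE } },
          };
          struct drm_i915_gem_context_param param = {
                  .ctx_id = ctx_id,
                  .param = I915_CONTEXT_PARAM_ENGINES,
                  .size = sizeof(map),
                  .value = (uintptr_t)&map,
          };

          return drmIoctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM, &param);
  }

After this, a single execbuf IOCTL on slot 0 must carry exactly two BBs, which the i915 fans out to the two hardware contexts created internally.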
Cc: Tvrtko Ursulin Signed-off-by: Matthew Brost --- drivers/gpu/drm/i915/gem/i915_gem_context.c | 157 +++++++++++++++++- .../gpu/drm/i915/gem/i915_gem_context_types.h | 6 + drivers/gpu/drm/i915/gt/intel_context_types.h | 8 +- drivers/gpu/drm/i915/gt/intel_engine.h | 12 +- drivers/gpu/drm/i915/gt/intel_engine_cs.c | 6 +- .../drm/i915/gt/intel_execlists_submission.c | 6 +- drivers/gpu/drm/i915/gt/selftest_execlists.c | 12 +- .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 111 +++++++++++-- include/uapi/drm/i915_drm.h | 118 +++++++++++++ 9 files changed, 407 insertions(+), 29 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c index d87a4c6da5bc..805fbcde03ea 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c @@ -519,9 +519,149 @@ set_proto_ctx_engines_bond(struct i915_user_extension __user *base, void *data) return 0; } +static int +set_proto_ctx_engines_parallel_submit(struct i915_user_extension __user *base, + void *data) +{ + struct i915_context_engines_parallel_submit __user *ext = + container_of_user(base, typeof(*ext), base); + const struct set_proto_ctx_engines *set = data; + struct drm_i915_private *i915 = set->i915; + u64 flags; + int err = 0, n, i, j; + u16 slot, width, num_siblings; + struct intel_engine_cs **siblings = NULL; + intel_engine_mask_t prev_mask; + + /* Disabling for now */ + return -ENODEV; + + if (!(intel_uc_uses_guc_submission(&i915->gt.uc))) + return -ENODEV; + + if (get_user(slot, &ext->engine_index)) + return -EFAULT; + + if (get_user(width, &ext->width)) + return -EFAULT; + + if (get_user(num_siblings, &ext->num_siblings)) + return -EFAULT; + + if (slot >= set->num_engines) { + drm_dbg(&i915->drm, "Invalid placement value, %d >= %d\n", + slot, set->num_engines); + return -EINVAL; + } + + if (set->engines[slot].type != I915_GEM_ENGINE_TYPE_INVALID) { + drm_dbg(&i915->drm, + "Invalid placement[%d], already occupied\n", slot); + return -EINVAL; + } + + if (get_user(flags, &ext->flags)) + return -EFAULT; + + if (flags) { + drm_dbg(&i915->drm, "Unknown flags 0x%02llx", flags); + return -EINVAL; + } + + for (n = 0; n < ARRAY_SIZE(ext->mbz64); n++) { + err = check_user_mbz(&ext->mbz64[n]); + if (err) + return err; + } + + if (width < 2) { + drm_dbg(&i915->drm, "Width (%d) < 2 \n", width); + return -EINVAL; + } + + if (num_siblings < 1) { + drm_dbg(&i915->drm, "Number siblings (%d) < 1 \n", + num_siblings); + return -EINVAL; + } + + siblings = kmalloc_array(num_siblings * width, + sizeof(*siblings), + GFP_KERNEL); + if (!siblings) + return -ENOMEM; + + /* Create contexts / engines */ + for (i = 0; i < width; ++i) { + intel_engine_mask_t current_mask = 0; + struct i915_engine_class_instance prev_engine; + + for (j = 0; j < num_siblings; ++j) { + struct i915_engine_class_instance ci; + n = i * num_siblings + j; + + if (copy_from_user(&ci, &ext->engines[n], sizeof(ci))) { + err = -EFAULT; + goto out_err; + } + + siblings[n] = + intel_engine_lookup_user(i915, ci.engine_class, + ci.engine_instance); + if (!siblings[n]) { + drm_dbg(&i915->drm, + "Invalid sibling[%d]: { class:%d, inst:%d }\n", + n, ci.engine_class, ci.engine_instance); + err = -EINVAL; + goto out_err; + } + + if (n) { + if (prev_engine.engine_class != + ci.engine_class) { + drm_dbg(&i915->drm, + "Mismatched class %d, %d\n", + prev_engine.engine_class, + ci.engine_class); + err = -EINVAL; + goto out_err; + } + } + + prev_engine = ci; + current_mask |= siblings[n]->logical_mask; + } + + if (i > 
0) { + if (current_mask != prev_mask << 1) { + drm_dbg(&i915->drm, + "Non contiguous logical mask 0x%x, 0x%x\n", + prev_mask, current_mask); + err = -EINVAL; + goto out_err; + } + } + prev_mask = current_mask; + } + + set->engines[slot].type = I915_GEM_ENGINE_TYPE_PARALLEL; + set->engines[slot].num_siblings = num_siblings; + set->engines[slot].width = width; + set->engines[slot].siblings = siblings; + + return 0; + +out_err: + kfree(siblings); + + return err; +} + static const i915_user_extension_fn set_proto_ctx_engines_extensions[] = { [I915_CONTEXT_ENGINES_EXT_LOAD_BALANCE] = set_proto_ctx_engines_balance, [I915_CONTEXT_ENGINES_EXT_BOND] = set_proto_ctx_engines_bond, + [I915_CONTEXT_ENGINES_EXT_PARALLEL_SUBMIT] = + set_proto_ctx_engines_parallel_submit, }; static int set_proto_ctx_engines(struct drm_i915_file_private *fpriv, @@ -942,7 +1082,7 @@ static struct i915_gem_engines *user_engines(struct i915_gem_context *ctx, e = alloc_engines(num_engines); for (n = 0; n < num_engines; n++) { - struct intel_context *ce; + struct intel_context *ce, *child; int ret; switch (pe[n].type) { @@ -952,7 +1092,13 @@ static struct i915_gem_engines *user_engines(struct i915_gem_context *ctx, case I915_GEM_ENGINE_TYPE_BALANCED: ce = intel_engine_create_virtual(pe[n].siblings, - pe[n].num_siblings); + pe[n].num_siblings, 0); + break; + + case I915_GEM_ENGINE_TYPE_PARALLEL: + ce = intel_engine_create_parallel(pe[n].siblings, + pe[n].num_siblings, + pe[n].width); break; case I915_GEM_ENGINE_TYPE_INVALID: @@ -973,6 +1119,13 @@ static struct i915_gem_engines *user_engines(struct i915_gem_context *ctx, err = ERR_PTR(ret); goto free_engines; } + for_each_child(ce, child) { + ret = intel_context_set_gem(child, ctx, pe->sseu); + if (ret) { + err = ERR_PTR(ret); + goto free_engines; + } + } } e->num_engines = num_engines; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h index 94c03a97cb77..7b096d83bca1 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h @@ -78,6 +78,9 @@ enum i915_gem_engine_type { /** @I915_GEM_ENGINE_TYPE_BALANCED: A load-balanced engine set */ I915_GEM_ENGINE_TYPE_BALANCED, + + /** @I915_GEM_ENGINE_TYPE_PARALLEL: A parallel engine set */ + I915_GEM_ENGINE_TYPE_PARALLEL, }; /** @@ -108,6 +111,9 @@ struct i915_gem_proto_engine { /** @num_siblings: Number of balanced siblings */ unsigned int num_siblings; + /** @width: Width of each sibling */ + unsigned int width; + /** @siblings: Balanced siblings */ struct intel_engine_cs **siblings; diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h index 5f165851e0df..5912eb291bad 100644 --- a/drivers/gpu/drm/i915/gt/intel_context_types.h +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h @@ -55,9 +55,13 @@ struct intel_context_ops { void (*reset)(struct intel_context *ce); void (*destroy)(struct kref *kref); - /* virtual engine/context interface */ + /* virtual/parallel engine/context interface */ struct intel_context *(*create_virtual)(struct intel_engine_cs **engine, - unsigned int count); + unsigned int count, + unsigned long flags); + struct intel_context *(*create_parallel)(struct intel_engine_cs **engines, + unsigned int num_siblings, + unsigned int width); struct intel_engine_cs *(*get_sibling)(struct intel_engine_cs *engine, unsigned int sibling); }; diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h b/drivers/gpu/drm/i915/gt/intel_engine.h index 
04ce7a3e1d82..38e8b9c2567e 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine.h +++ b/drivers/gpu/drm/i915/gt/intel_engine.h @@ -279,9 +279,19 @@ intel_engine_has_preempt_reset(const struct intel_engine_cs *engine) return intel_engine_has_preemption(engine); } +#define FORCE_VIRTUAL BIT(0) struct intel_context * intel_engine_create_virtual(struct intel_engine_cs **siblings, - unsigned int count); + unsigned int count, unsigned long flags); + +static inline struct intel_context * +intel_engine_create_parallel(struct intel_engine_cs **engines, + unsigned int num_engines, + unsigned int width) +{ + GEM_BUG_ON(!engines[0]->cops->create_parallel); + return engines[0]->cops->create_parallel(engines, num_engines, width); +} static inline bool intel_virtual_engine_has_heartbeat(const struct intel_engine_cs *engine) diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c index 1ce112fefcf5..bbc5df0ce534 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c +++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c @@ -1866,16 +1866,16 @@ ktime_t intel_engine_get_busy_time(struct intel_engine_cs *engine, ktime_t *now) struct intel_context * intel_engine_create_virtual(struct intel_engine_cs **siblings, - unsigned int count) + unsigned int count, unsigned long flags) { if (count == 0) return ERR_PTR(-EINVAL); - if (count == 1) + if (count == 1 && !(flags & FORCE_VIRTUAL)) return intel_context_create(siblings[0]); GEM_BUG_ON(!siblings[0]->cops->create_virtual); - return siblings[0]->cops->create_virtual(siblings, count); + return siblings[0]->cops->create_virtual(siblings, count, flags); } struct i915_request * diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c index f278744fd4d0..87ce28320098 100644 --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c @@ -195,7 +195,8 @@ static struct virtual_engine *to_virtual_engine(struct intel_engine_cs *engine) } static struct intel_context * -execlists_create_virtual(struct intel_engine_cs **siblings, unsigned int count); +execlists_create_virtual(struct intel_engine_cs **siblings, unsigned int count, + unsigned long flags); static struct i915_request * __active_request(const struct intel_timeline * const tl, @@ -3729,7 +3730,8 @@ static void virtual_submit_request(struct i915_request *rq) } static struct intel_context * -execlists_create_virtual(struct intel_engine_cs **siblings, unsigned int count) +execlists_create_virtual(struct intel_engine_cs **siblings, unsigned int count, + unsigned long flags) { struct virtual_engine *ve; unsigned int n; diff --git a/drivers/gpu/drm/i915/gt/selftest_execlists.c b/drivers/gpu/drm/i915/gt/selftest_execlists.c index 59cf8afc6d6f..5c1b01032753 100644 --- a/drivers/gpu/drm/i915/gt/selftest_execlists.c +++ b/drivers/gpu/drm/i915/gt/selftest_execlists.c @@ -3727,7 +3727,7 @@ static int nop_virtual_engine(struct intel_gt *gt, GEM_BUG_ON(!nctx || nctx > ARRAY_SIZE(ve)); for (n = 0; n < nctx; n++) { - ve[n] = intel_engine_create_virtual(siblings, nsibling); + ve[n] = intel_engine_create_virtual(siblings, nsibling, 0); if (IS_ERR(ve[n])) { err = PTR_ERR(ve[n]); nctx = n; @@ -3923,7 +3923,7 @@ static int mask_virtual_engine(struct intel_gt *gt, * restrict it to our desired engine within the virtual engine. 
*/ - ve = intel_engine_create_virtual(siblings, nsibling); + ve = intel_engine_create_virtual(siblings, nsibling, 0); if (IS_ERR(ve)) { err = PTR_ERR(ve); goto out_close; @@ -4054,7 +4054,7 @@ static int slicein_virtual_engine(struct intel_gt *gt, i915_request_add(rq); } - ce = intel_engine_create_virtual(siblings, nsibling); + ce = intel_engine_create_virtual(siblings, nsibling, 0); if (IS_ERR(ce)) { err = PTR_ERR(ce); goto out; @@ -4106,7 +4106,7 @@ static int sliceout_virtual_engine(struct intel_gt *gt, /* XXX We do not handle oversubscription and fairness with normal rq */ for (n = 0; n < nsibling; n++) { - ce = intel_engine_create_virtual(siblings, nsibling); + ce = intel_engine_create_virtual(siblings, nsibling, 0); if (IS_ERR(ce)) { err = PTR_ERR(ce); goto out; @@ -4208,7 +4208,7 @@ static int preserved_virtual_engine(struct intel_gt *gt, if (err) goto out_scratch; - ve = intel_engine_create_virtual(siblings, nsibling); + ve = intel_engine_create_virtual(siblings, nsibling, 0); if (IS_ERR(ve)) { err = PTR_ERR(ve); goto out_scratch; @@ -4348,7 +4348,7 @@ static int reset_virtual_engine(struct intel_gt *gt, if (igt_spinner_init(&spin, gt)) return -ENOMEM; - ve = intel_engine_create_virtual(siblings, nsibling); + ve = intel_engine_create_virtual(siblings, nsibling, 0); if (IS_ERR(ve)) { err = PTR_ERR(ve); goto out_spin; diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index 81a5439de924..326cb4fc8a9d 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -82,7 +82,8 @@ */ static struct intel_context * -guc_create_virtual(struct intel_engine_cs **siblings, unsigned int count); +guc_create_virtual(struct intel_engine_cs **siblings, unsigned int count, + unsigned long flags); #define GUC_REQUEST_SIZE 64 /* bytes */ @@ -2494,8 +2495,6 @@ static void guc_context_post_unpin(struct intel_context *ce) __guc_context_post_unpin(ce); } -/* Future patches will use this function */ -__attribute__ ((unused)) static int guc_parent_context_pre_pin(struct intel_context *ce, struct i915_gem_ww_ctx *ww) { @@ -2539,8 +2538,6 @@ static int guc_parent_context_pre_pin(struct intel_context *ce, return err; } -/* Future patches will use this function */ -__attribute__ ((unused)) static void guc_parent_context_post_unpin(struct intel_context *ce) { struct intel_context *child; @@ -2556,8 +2553,6 @@ static void guc_parent_context_post_unpin(struct intel_context *ce) } } -/* Future patches will use this function */ -__attribute__ ((unused)) static int guc_parent_context_pin(struct intel_context *ce) { int ret, i = 0, j = 0; @@ -2603,8 +2598,6 @@ static int guc_parent_context_pin(struct intel_context *ce) return ret; } -/* Future patches will use this function */ -__attribute__ ((unused)) static void guc_parent_context_unpin(struct intel_context *ce) { struct intel_context *child; @@ -3017,8 +3010,6 @@ static void destroy_worker_func(struct work_struct *w) intel_gt_pm_unpark_work_add(gt, destroy_worker); } -/* Future patches will use this function */ -__attribute__ ((unused)) static void guc_child_context_destroy(struct kref *kref) { __guc_context_destroy(container_of(kref, struct intel_context, ref)); @@ -3236,6 +3227,11 @@ static void remove_from_context(struct i915_request *rq) i915_request_notify_execute_cb_imm(rq); } +static struct intel_context * +guc_create_parallel(struct intel_engine_cs **engines, + unsigned int num_siblings, + unsigned int width); + static const struct 
intel_context_ops guc_context_ops = { .alloc = guc_context_alloc, @@ -3257,6 +3253,7 @@ static const struct intel_context_ops guc_context_ops = { .destroy = guc_context_destroy, .create_virtual = guc_create_virtual, + .create_parallel = guc_create_parallel, }; static void __guc_signal_context_fence(struct intel_context *ce) @@ -3745,6 +3742,91 @@ static void guc_retire_inflight_request_prio(struct i915_request *rq) spin_unlock(&ce->guc_active.lock); } +static const struct intel_context_ops virtual_parent_context_ops = { + .alloc = guc_virtual_context_alloc, + + .pre_pin = guc_parent_context_pre_pin, + .pin = guc_parent_context_pin, + .unpin = guc_parent_context_unpin, + .post_unpin = guc_parent_context_post_unpin, + + .ban = guc_context_ban, + + .enter = guc_virtual_context_enter, + .exit = guc_virtual_context_exit, + + .sched_disable = guc_context_sched_disable, + + .destroy = guc_context_destroy, + + .get_sibling = guc_virtual_get_sibling, +}; + +static const struct intel_context_ops virtual_child_context_ops = { + .alloc = guc_virtual_context_alloc, + + .enter = guc_virtual_context_enter, + .exit = guc_virtual_context_exit, + + .destroy = guc_child_context_destroy, +}; + +static struct intel_context * +guc_create_parallel(struct intel_engine_cs **engines, + unsigned int num_siblings, + unsigned int width) +{ + struct intel_engine_cs **siblings = NULL; + struct intel_context *parent = NULL, *ce, *err; + int i, j; + int ret; + + siblings = kmalloc_array(num_siblings, + sizeof(*siblings), + GFP_KERNEL); + if (!siblings) + return ERR_PTR(-ENOMEM); + + for (i = 0; i < width; ++i) { + for (j = 0; j < num_siblings; ++j) + siblings[j] = engines[i * num_siblings + j]; + + ce = intel_engine_create_virtual(siblings, num_siblings, + FORCE_VIRTUAL); + if (!ce) { + err = ERR_PTR(-ENOMEM); + goto unwind; + } + + if (i == 0) { + parent = ce; + } else { + intel_context_bind_parent_child(parent, ce); + ret = intel_context_alloc_state(ce); + if (ret) { + err = ERR_PTR(ret); + goto unwind; + } + } + } + + parent->ops = &virtual_parent_context_ops; + for_each_child(parent, ce) + ce->ops = &virtual_child_context_ops; + + kfree(siblings); + return parent; + +unwind: + if (parent) { + for_each_child(parent, ce) + intel_context_put(ce); + intel_context_put(parent); + } + kfree(siblings); + return err; +} + static void sanitize_hwsp(struct intel_engine_cs *engine) { struct intel_timeline *tl; @@ -4540,7 +4622,8 @@ void intel_guc_submission_print_context_info(struct intel_guc *guc, } static struct intel_context * -guc_create_virtual(struct intel_engine_cs **siblings, unsigned int count) +guc_create_virtual(struct intel_engine_cs **siblings, unsigned int count, + unsigned long flags) { struct guc_virtual_engine *ve; struct intel_guc *guc; @@ -4553,7 +4636,9 @@ guc_create_virtual(struct intel_engine_cs **siblings, unsigned int count) return ERR_PTR(-ENOMEM); guc = &siblings[0]->gt->uc.guc; - sched_engine = guc_to_sched_engine(guc, GUC_SUBMIT_ENGINE_SINGLE_LRC); + sched_engine = guc_to_sched_engine(guc, (flags & FORCE_VIRTUAL) ? 
+ GUC_SUBMIT_ENGINE_MULTI_LRC : + GUC_SUBMIT_ENGINE_SINGLE_LRC); ve->base.i915 = siblings[0]->i915; ve->base.gt = siblings[0]->gt; diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h index b617f98598f8..162a4f340d3b 100644 --- a/include/uapi/drm/i915_drm.h +++ b/include/uapi/drm/i915_drm.h @@ -1777,6 +1777,7 @@ struct drm_i915_gem_context_param { * Extensions: * i915_context_engines_load_balance (I915_CONTEXT_ENGINES_EXT_LOAD_BALANCE) * i915_context_engines_bond (I915_CONTEXT_ENGINES_EXT_BOND) + * i915_context_engines_parallel_submit (I915_CONTEXT_ENGINES_EXT_PARALLEL_SUBMIT) */ #define I915_CONTEXT_PARAM_ENGINES 0xa @@ -2002,6 +2003,122 @@ struct i915_context_engines_bond { struct i915_engine_class_instance engines[N__]; \ } __attribute__((packed)) name__ +/** + * struct i915_context_engines_parallel_submit - Configure engine for + * parallel submission. + * + * Setup a slot in the context engine map to allow multiple BBs to be submitted + * in a single execbuf IOCTL. Those BBs will then be scheduled to run on the GPU + * in parallel. Multiple hardware contexts are created internally in the i915 + * run these BBs. Once a slot is configured for N BBs only N BBs can be + * submitted in each execbuf IOCTL and this is implicit behavior e.g. The user + * doesn't tell the execbuf IOCTL there are N BBs, the execbuf IOCTL knows how + * many BBs there are based on the slot's configuration. The N BBs are the last + * N buffer objects or first N if I915_EXEC_BATCH_FIRST is set. + * + * The default placement behavior is to create implicit bonds between each + * context if each context maps to more than 1 physical engine (e.g. context is + * a virtual engine). Also we only allow contexts of same engine class and these + * contexts must be in logically contiguous order. Examples of the placement + * behavior described below. Lastly, the default is to not allow BBs to + * preempted mid BB rather insert coordinated preemption on all hardware + * contexts between each set of BBs. Flags may be added in the future to change + * both of these default behaviors. + * + * Returns -EINVAL if hardware context placement configuration is invalid or if + * the placement configuration isn't supported on the platform / submission + * interface. + * Returns -ENODEV if extension isn't supported on the platform / submission + * interface. + * + * .. code-block:: none + * + * Example 1 pseudo code: + * CS[X] = generic engine of same class, logical instance X + * INVALID = I915_ENGINE_CLASS_INVALID, I915_ENGINE_CLASS_INVALID_NONE + * set_engines(INVALID) + * set_parallel(engine_index=0, width=2, num_siblings=1, + * engines=CS[0],CS[1]) + * + * Results in the following valid placement: + * CS[0], CS[1] + * + * Example 2 pseudo code: + * CS[X] = generic engine of same class, logical instance X + * INVALID = I915_ENGINE_CLASS_INVALID, I915_ENGINE_CLASS_INVALID_NONE + * set_engines(INVALID) + * set_parallel(engine_index=0, width=2, num_siblings=2, + * engines=CS[0],CS[2],CS[1],CS[3]) + * + * Results in the following valid placements: + * CS[0], CS[1] + * CS[2], CS[3] + * + * This can also be thought of as 2 virtual engines described by 2-D array + * in the engines the field with bonds placed between each index of the + * virtual engines. e.g. CS[0] is bonded to CS[1], CS[2] is bonded to + * CS[3]. 
+ * VE[0] = CS[0], CS[2] + * VE[1] = CS[1], CS[3] + * + * Example 3 pseudo code: + * CS[X] = generic engine of same class, logical instance X + * INVALID = I915_ENGINE_CLASS_INVALID, I915_ENGINE_CLASS_INVALID_NONE + * set_engines(INVALID) + * set_parallel(engine_index=0, width=2, num_siblings=2, + * engines=CS[0],CS[1],CS[1],CS[3]) + * + * Results in the following valid and invalid placements: + * CS[0], CS[1] + * CS[1], CS[3] - Not logical contiguous, return -EINVAL + */ +struct i915_context_engines_parallel_submit { + /** + * @base: base user extension. + */ + struct i915_user_extension base; + + /** + * @engine_index: slot for parallel engine + */ + __u16 engine_index; + + /** + * @width: number of contexts per parallel engine + */ + __u16 width; + + /** + * @num_siblings: number of siblings per context + */ + __u16 num_siblings; + + /** + * @mbz16: reserved for future use; must be zero + */ + __u16 mbz16; + + /** + * @flags: all undefined flags must be zero, currently not defined flags + */ + __u64 flags; + + /** + * @mbz64: reserved for future use; must be zero + */ + __u64 mbz64[3]; + + /** + * @engines: 2-d array of engine instances to configure parallel engine + * + * length = width (i) * num_siblings (j) + * index = j + i * num_siblings + */ + struct i915_engine_class_instance engines[0]; + +} __packed; + + /** * DOC: Context Engine Map uAPI * @@ -2061,6 +2178,7 @@ struct i915_context_param_engines { __u64 extensions; /* linked chain of extension blocks, 0 terminates */ #define I915_CONTEXT_ENGINES_EXT_LOAD_BALANCE 0 /* see i915_context_engines_load_balance */ #define I915_CONTEXT_ENGINES_EXT_BOND 1 /* see i915_context_engines_bond */ +#define I915_CONTEXT_ENGINES_EXT_PARALLEL_SUBMIT 2 /* see i915_context_engines_parallel_submit */ struct i915_engine_class_instance engines[0]; } __attribute__((packed)); From patchwork Tue Jul 20 20:57:48 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389391 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id AECF1C636C8 for ; Tue, 20 Jul 2021 20:41:46 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 8367E60FF3 for ; Tue, 20 Jul 2021 20:41:46 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 8367E60FF3 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 307D26E5CE; Tue, 20 Jul 2021 20:40:57 +0000 (UTC) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by gabe.freedesktop.org (Postfix) with ESMTPS id 378D56E50C; Tue, 20 Jul 2021 20:40:18 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="296885376" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; 
d="scan'208";a="296885376" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:17 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906087" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 28/42] drm/i915/guc: Add basic GuC multi-lrc selftest Date: Tue, 20 Jul 2021 13:57:48 -0700 Message-Id: <20210720205802.39610-29-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Add very basic (single submission) multi-lrc selftest. Signed-off-by: Matthew Brost --- .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 1 + .../drm/i915/gt/uc/selftest_guc_multi_lrc.c | 168 ++++++++++++++++++ .../drm/i915/selftests/i915_live_selftests.h | 1 + 3 files changed, 170 insertions(+) create mode 100644 drivers/gpu/drm/i915/gt/uc/selftest_guc_multi_lrc.c diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index 326cb4fc8a9d..dde6aab1db85 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -4737,4 +4737,5 @@ bool intel_guc_virtual_engine_has_heartbeat(const struct intel_engine_cs *ve) #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST) #include "selftest_guc_flow_control.c" +#include "selftest_guc_multi_lrc.c" #endif diff --git a/drivers/gpu/drm/i915/gt/uc/selftest_guc_multi_lrc.c b/drivers/gpu/drm/i915/gt/uc/selftest_guc_multi_lrc.c new file mode 100644 index 000000000000..226e64fee919 --- /dev/null +++ b/drivers/gpu/drm/i915/gt/uc/selftest_guc_multi_lrc.c @@ -0,0 +1,168 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright �� 2019 Intel Corporation + */ + +#include "selftests/igt_spinner.h" +#include "selftests/igt_reset.h" +#include "selftests/intel_scheduler_helpers.h" +#include "gt/intel_engine_heartbeat.h" +#include "gem/selftests/mock_context.h" + +static void logical_sort(struct intel_engine_cs **engines, int num_engines) +{ + struct intel_engine_cs *sorted[MAX_ENGINE_INSTANCE + 1]; + int i, j; + + for (i = 0; i < num_engines; ++i) + for (j = 0; j < MAX_ENGINE_INSTANCE + 1; ++j) { + if (engines[j]->logical_mask & BIT(i)) { + sorted[i] = engines[j]; + break; + } + } + + memcpy(*engines, *sorted, + sizeof(struct intel_engine_cs *) * num_engines); +} + +static struct intel_context * +multi_lrc_create_parent(struct intel_gt *gt, u8 class, + unsigned long flags) +{ + struct intel_engine_cs *siblings[MAX_ENGINE_INSTANCE + 1]; + struct intel_engine_cs *engine; + enum intel_engine_id id; + int i = 0; + + for_each_engine(engine, gt, id) { + if (engine->class != class) + continue; + + siblings[i++] = engine; + } + + if (i <= 1) + return ERR_PTR(0); + + logical_sort(siblings, i); + + return intel_engine_create_parallel(siblings, 1, i); +} + +static void multi_lrc_context_put(struct intel_context *ce) +{ + GEM_BUG_ON(!intel_context_is_parent(ce)); + + /* + * Only the parent gets the creation ref put in the uAPI, 
the parent + * itself is responsible for creation ref put on the children. + */ + intel_context_put(ce); +} + +static struct i915_request * +multi_lrc_nop_request(struct intel_context *ce) +{ + struct intel_context *child; + struct i915_request *rq, *child_rq; + int i = 0; + + GEM_BUG_ON(!intel_context_is_parent(ce)); + + rq = intel_context_create_request(ce); + if (IS_ERR(rq)) + return rq; + + i915_request_get(rq); + i915_request_add(rq); + + for_each_child(ce, child) { + child_rq = intel_context_create_request(child); + if (IS_ERR(child_rq)) + goto child_error; + + if (++i == ce->guc_number_children) + set_bit(I915_FENCE_FLAG_SUBMIT_PARALLEL, + &child_rq->fence.flags); + i915_request_add(child_rq); + } + + return rq; + +child_error: + i915_request_put(rq); + + return ERR_PTR(-ENOMEM); +} + +static int __intel_guc_multi_lrc_basic(struct intel_gt *gt, unsigned int class) +{ + struct intel_context *parent; + struct i915_request *rq; + int ret; + + parent = multi_lrc_create_parent(gt, class, 0); + if (IS_ERR(parent)) { + pr_err("Failed creating contexts: %ld", PTR_ERR(parent)); + return PTR_ERR(parent); + } else if (parent == NULL) { + pr_debug("Not enough engines in class: %d", + VIDEO_DECODE_CLASS); + return 0; + } + + rq = multi_lrc_nop_request(parent); + if (IS_ERR(rq)) { + ret = PTR_ERR(rq); + pr_err("Failed creating requests: %d", ret); + goto out; + } + + ret = intel_selftest_wait_for_rq(rq); + if (ret) + pr_err("Failed waiting on request: %d", ret); + + i915_request_put(rq); + + if (ret >= 0) { + ret = intel_gt_wait_for_idle(gt, HZ * 5); + if (ret < 0) + pr_err("GT failed to idle: %d\n", ret); + } + +out: + multi_lrc_context_put(parent); + return ret; +} + +static int intel_guc_multi_lrc_basic(void *arg) +{ + struct intel_gt *gt = arg; + unsigned class; + int ret; + + for (class = 0; class < MAX_ENGINE_CLASS + 1; ++class) { + ret = __intel_guc_multi_lrc_basic(gt, class); + if (ret) + return ret; + } + + return 0; +} + +int intel_guc_multi_lrc(struct drm_i915_private *i915) +{ + static const struct i915_subtest tests[] = { + SUBTEST(intel_guc_multi_lrc_basic), + }; + struct intel_gt *gt = &i915->gt; + + if (intel_gt_is_wedged(gt)) + return 0; + + if (!intel_uc_uses_guc_submission(>->uc)) + return 0; + + return intel_gt_live_subtests(tests, gt); +} diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h index d9bd732b741a..2ddb72bbab69 100644 --- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h +++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h @@ -48,5 +48,6 @@ selftest(execlists, intel_execlists_live_selftests) selftest(ring_submission, intel_ring_submission_live_selftests) selftest(perf, i915_perf_live_selftests) selftest(guc_flow_control, intel_guc_flow_control) +selftest(guc_multi_lrc, intel_guc_multi_lrc) /* Here be dragons: keep last to run last! 
*/ selftest(late_gt_pm, intel_gt_pm_late_selftests) From patchwork Tue Jul 20 20:57:49 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389359 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 260B0C636C8 for ; Tue, 20 Jul 2021 20:41:18 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id E5FAB60FF3 for ; Tue, 20 Jul 2021 20:41:17 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org E5FAB60FF3 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 337576E581; Tue, 20 Jul 2021 20:40:35 +0000 (UTC) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by gabe.freedesktop.org (Postfix) with ESMTPS id 35FA66E4E3; Tue, 20 Jul 2021 20:40:18 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="209421888" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="209421888" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:17 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906088" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 29/42] drm/i915/guc: Implement BB boundary preemption for multi-lrc Date: Tue, 20 Jul 2021 13:57:49 -0700 Message-Id: <20210720205802.39610-30-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" For some users of multi-lrc, e.g. split frame, it isn't safe to preempt mid BB. To safely enable preemption at the BB boundary, a handshake between to parent and child is needed. This is implemented via custom emit_bb_start & emit_fini_breadcrumb functions and enabled via by default if a context is configured by set parallel extension. 
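As a rough trace of what the custom emit functions below program into the rings for one parent/child pair (my reading of the patch, not text from it; the go/join dwords live one cacheline apart after the process descriptor in the parent's extra context page, and the values are the PARENT_GO_*/CHILD_GO_* defines):

  parent BB start:  for each child i: MI_SEMAPHORE_WAIT join[i] == PARENT_GO_BB
                    MI_ARB_ON_OFF (disable arbitration)
                    GGTT write go = CHILD_GO_BB
                    MI_BATCH_BUFFER_START
  child BB start:   GGTT write join[idx] = PARENT_GO_BB
                    MI_SEMAPHORE_WAIT go == CHILD_GO_BB
                    MI_ARB_ON_OFF (disable arbitration)
                    MI_BATCH_BUFFER_START
  parent fini:      for each child i: wait join[i] == PARENT_GO_FINI_BREADCRUMB
                    MI_ARB_ON_OFF (enable), GGTT write go = CHILD_GO_FINI_BREADCRUMB
                    seqno write, MI_USER_INTERRUPT
  child fini:       MI_ARB_ON_OFF (enable), GGTT write join[idx] = PARENT_GO_FINI_BREADCRUMB
                    wait go == CHILD_GO_FINI_BREADCRUMB
                    seqno write, MI_USER_INTERRUPT

The only span where every context has arbitration enabled is between a completed set of fini breadcrumbs and the next set of BB starts, which is the BB-boundary preemption window this patch creates.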
Signed-off-by: Matthew Brost --- drivers/gpu/drm/i915/gt/intel_context.c | 2 +- drivers/gpu/drm/i915/gt/intel_context_types.h | 3 + drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h | 2 +- .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 285 +++++++++++++++++- 4 files changed, 289 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c index 7395894724f8..35f5d28c7465 100644 --- a/drivers/gpu/drm/i915/gt/intel_context.c +++ b/drivers/gpu/drm/i915/gt/intel_context.c @@ -639,7 +639,7 @@ void intel_context_bind_parent_child(struct intel_context *parent, GEM_BUG_ON(intel_context_is_child(child)); GEM_BUG_ON(intel_context_is_parent(child)); - parent->guc_number_children++; + child->guc_child_index = parent->guc_number_children++; list_add_tail(&child->guc_child_link, &parent->guc_child_list); child->parent = parent; diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h index 5912eb291bad..4886e107c1f8 100644 --- a/drivers/gpu/drm/i915/gt/intel_context_types.h +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h @@ -222,6 +222,9 @@ struct intel_context { /* Number of children if parent */ u8 guc_number_children; + /* Child index if child */ + u8 guc_child_index; + u8 parent_page; /* if set, page num reserved for parent context */ /* diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h index 820cb6b5d2d0..11d64746dbf1 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h @@ -181,7 +181,7 @@ struct guc_process_desc { u32 wq_status; u32 engine_presence; u32 priority; - u32 reserved[30]; + u32 reserved[36]; } __packed; #define CONTEXT_REGISTRATION_FLAG_KMD BIT(0) diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index dde6aab1db85..b9c43ba823b4 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -11,6 +11,7 @@ #include "gt/intel_context.h" #include "gt/intel_engine_pm.h" #include "gt/intel_engine_heartbeat.h" +#include "gt/intel_gpu_commands.h" #include "gt/intel_gt.h" #include "gt/intel_gt_irq.h" #include "gt/intel_gt_pm.h" @@ -460,10 +461,14 @@ static inline struct i915_priolist *to_priolist(struct rb_node *rb) /* * When using multi-lrc submission an extra page in the context state is - * reserved for the process descriptor and work queue. + * reserved for the process descriptor, work queue, and preempt BB boundary + * handshake between the parent + childlren contexts. * * The layout of this page is below: * 0 guc_process_desc + * + sizeof(struct guc_process_desc) child go + * + CACHELINE_BYTES child join ... + * + CACHELINE_BYTES ... * ... unused * PAGE_SIZE / 2 work queue start * ... 
work queue @@ -2166,6 +2171,30 @@ static int deregister_context(struct intel_context *ce, u32 guc_id, bool loop) return __guc_action_deregister_context(guc, guc_id, loop); } +static inline void clear_children_join_go_memory(struct intel_context *ce) +{ + u32 *mem = (u32 *)(__get_process_desc(ce) + 1); + u8 i; + + for (i = 0; i < ce->guc_number_children + 1; ++i) + mem[i * (CACHELINE_BYTES / sizeof(u32))] = 0; +} + +static inline u32 get_children_go_value(struct intel_context *ce) +{ + u32 *mem = (u32 *)(__get_process_desc(ce) + 1); + + return mem[0]; +} + +static inline u32 get_children_join_value(struct intel_context *ce, + u8 child_index) +{ + u32 *mem = (u32 *)(__get_process_desc(ce) + 1); + + return mem[(child_index + 1) * (CACHELINE_BYTES / sizeof(u32))]; +} + static void guc_context_policy_init(struct intel_engine_cs *engine, struct guc_lrc_desc *desc) { @@ -2361,6 +2390,8 @@ static int guc_lrc_desc_pin(struct intel_context *ce, bool loop) desc->context_flags = CONTEXT_REGISTRATION_FLAG_KMD; guc_context_policy_init(engine, desc); } + + clear_children_join_go_memory(ce); } /* @@ -3771,6 +3802,32 @@ static const struct intel_context_ops virtual_child_context_ops = { .destroy = guc_child_context_destroy, }; +/* + * The below override of the breadcrumbs is enabled when the user configures a + * context for parallel submission (multi-lrc, parent-child). + * + * The overridden breadcrumbs implements an algorithm which allows the GuC to + * safely preempt all the hw contexts configured for parallel submission + * between each BB. The contract between the i915 and GuC is if the parent + * context can be preempted, all the children can be preempted, and the GuC will + * always try to preempt the parent before the children. A handshake between the + * parent / children breadcrumbs ensures the i915 holds up its end of the deal + * creating a window to preempt between each set of BBs. 
+ */ +static int emit_bb_start_parent_bb_preempt_boundary(struct i915_request *rq, + u64 offset, u32 len, + const unsigned int flags); +static int emit_bb_start_child_bb_preempt_boundary(struct i915_request *rq, + u64 offset, u32 len, + const unsigned int flags); +static u32 * +emit_fini_breadcrumb_parent_bb_preempt_boundary(struct i915_request *rq, + u32 *cs); +static u32 * +emit_fini_breadcrumb_child_bb_preempt_boundary(struct i915_request *rq, + u32 *cs); + + static struct intel_context * guc_create_parallel(struct intel_engine_cs **engines, unsigned int num_siblings, @@ -3814,6 +3871,20 @@ guc_create_parallel(struct intel_engine_cs **engines, for_each_child(parent, ce) ce->ops = &virtual_child_context_ops; + parent->engine->emit_bb_start = + emit_bb_start_parent_bb_preempt_boundary; + parent->engine->emit_fini_breadcrumb = + emit_fini_breadcrumb_parent_bb_preempt_boundary; + parent->engine->emit_fini_breadcrumb_dw = + 12 + 4 * parent->guc_number_children; + for_each_child(parent, ce) { + ce->engine->emit_bb_start = + emit_bb_start_child_bb_preempt_boundary; + ce->engine->emit_fini_breadcrumb = + emit_fini_breadcrumb_child_bb_preempt_boundary; + ce->engine->emit_fini_breadcrumb_dw = 16; + } + kfree(siblings); return parent; @@ -4174,6 +4245,205 @@ void intel_guc_submission_init_early(struct intel_guc *guc) guc->submission_selected = __guc_submission_selected(guc); } +static inline u32 get_children_go_addr(struct intel_context *ce) +{ + GEM_BUG_ON(!intel_context_is_parent(ce)); + + return i915_ggtt_offset(ce->state) + + __get_process_desc_offset(ce) + + sizeof(struct guc_process_desc); +} + +static inline u32 get_children_join_addr(struct intel_context *ce, + u8 child_index) +{ + GEM_BUG_ON(!intel_context_is_parent(ce)); + + return get_children_go_addr(ce) + (child_index + 1) * CACHELINE_BYTES; +} + +#define PARENT_GO_BB 1 +#define PARENT_GO_FINI_BREADCRUMB 0 +#define CHILD_GO_BB 1 +#define CHILD_GO_FINI_BREADCRUMB 0 +static int emit_bb_start_parent_bb_preempt_boundary(struct i915_request *rq, + u64 offset, u32 len, + const unsigned int flags) +{ + struct intel_context *ce = rq->context; + u32 *cs; + u8 i; + + GEM_BUG_ON(!intel_context_is_parent(ce)); + + cs = intel_ring_begin(rq, 10 + 4 * ce->guc_number_children); + if (IS_ERR(cs)) + return PTR_ERR(cs); + + /* Wait on chidlren */ + for (i = 0; i < ce->guc_number_children; ++i) { + *cs++ = (MI_SEMAPHORE_WAIT | + MI_SEMAPHORE_GLOBAL_GTT | + MI_SEMAPHORE_POLL | + MI_SEMAPHORE_SAD_EQ_SDD); + *cs++ = PARENT_GO_BB; + *cs++ = get_children_join_addr(ce, i); + *cs++ = 0; + } + + /* Turn off preemption */ + *cs++ = MI_ARB_ON_OFF | MI_ARB_DISABLE; + *cs++ = MI_NOOP; + + /* Tell children go */ + cs = gen8_emit_ggtt_write(cs, + CHILD_GO_BB, + get_children_go_addr(ce), + 0); + + /* Jump to batch */ + *cs++ = MI_BATCH_BUFFER_START_GEN8 | + (flags & I915_DISPATCH_SECURE ? 
0 : BIT(8)); + *cs++ = lower_32_bits(offset); + *cs++ = upper_32_bits(offset); + *cs++ = MI_NOOP; + + intel_ring_advance(rq, cs); + + return 0; +} + +static int emit_bb_start_child_bb_preempt_boundary(struct i915_request *rq, + u64 offset, u32 len, + const unsigned int flags) +{ + struct intel_context *ce = rq->context; + u32 *cs; + + GEM_BUG_ON(!intel_context_is_child(ce)); + + cs = intel_ring_begin(rq, 12); + if (IS_ERR(cs)) + return PTR_ERR(cs); + + /* Signal parent */ + cs = gen8_emit_ggtt_write(cs, + PARENT_GO_BB, + get_children_join_addr(ce->parent, + ce->guc_child_index), + 0); + + /* Wait parent on for go */ + *cs++ = (MI_SEMAPHORE_WAIT | + MI_SEMAPHORE_GLOBAL_GTT | + MI_SEMAPHORE_POLL | + MI_SEMAPHORE_SAD_EQ_SDD); + *cs++ = CHILD_GO_BB; + *cs++ = get_children_go_addr(ce->parent); + *cs++ = 0; + + /* Turn off preemption */ + *cs++ = MI_ARB_ON_OFF | MI_ARB_DISABLE; + + /* Jump to batch */ + *cs++ = MI_BATCH_BUFFER_START_GEN8 | + (flags & I915_DISPATCH_SECURE ? 0 : BIT(8)); + *cs++ = lower_32_bits(offset); + *cs++ = upper_32_bits(offset); + + intel_ring_advance(rq, cs); + + return 0; +} + +static u32 * +emit_fini_breadcrumb_parent_bb_preempt_boundary(struct i915_request *rq, + u32 *cs) +{ + struct intel_context *ce = rq->context; + u8 i; + + GEM_BUG_ON(!intel_context_is_parent(ce)); + + /* Wait on children */ + for (i = 0; i < ce->guc_number_children; ++i) { + *cs++ = (MI_SEMAPHORE_WAIT | + MI_SEMAPHORE_GLOBAL_GTT | + MI_SEMAPHORE_POLL | + MI_SEMAPHORE_SAD_EQ_SDD); + *cs++ = PARENT_GO_FINI_BREADCRUMB; + *cs++ = get_children_join_addr(ce, i); + *cs++ = 0; + } + + /* Turn on preemption */ + *cs++ = MI_ARB_ON_OFF | MI_ARB_ENABLE; + *cs++ = MI_NOOP; + + /* Tell children go */ + cs = gen8_emit_ggtt_write(cs, + CHILD_GO_FINI_BREADCRUMB, + get_children_go_addr(ce), + 0); + + /* Emit fini breadcrumb */ + cs = gen8_emit_ggtt_write(cs, + rq->fence.seqno, + i915_request_active_timeline(rq)->hwsp_offset, + 0); + + /* User interrupt */ + *cs++ = MI_USER_INTERRUPT; + *cs++ = MI_NOOP; + + rq->tail = intel_ring_offset(rq, cs); + + return cs; +} + +static u32 * +emit_fini_breadcrumb_child_bb_preempt_boundary(struct i915_request *rq, + u32 *cs) +{ + struct intel_context *ce = rq->context; + + GEM_BUG_ON(!intel_context_is_child(ce)); + + /* Turn on preemption */ + *cs++ = MI_ARB_ON_OFF | MI_ARB_ENABLE; + *cs++ = MI_NOOP; + + /* Signal parent */ + cs = gen8_emit_ggtt_write(cs, + PARENT_GO_FINI_BREADCRUMB, + get_children_join_addr(ce->parent, + ce->guc_child_index), + 0); + + /* Wait parent on for go */ + *cs++ = (MI_SEMAPHORE_WAIT | + MI_SEMAPHORE_GLOBAL_GTT | + MI_SEMAPHORE_POLL | + MI_SEMAPHORE_SAD_EQ_SDD); + *cs++ = CHILD_GO_FINI_BREADCRUMB; + *cs++ = get_children_go_addr(ce->parent); + *cs++ = 0; + + /* Emit fini breadcrumb */ + cs = gen8_emit_ggtt_write(cs, + rq->fence.seqno, + i915_request_active_timeline(rq)->hwsp_offset, + 0); + + /* User interrupt */ + *cs++ = MI_USER_INTERRUPT; + *cs++ = MI_NOOP; + + rq->tail = intel_ring_offset(rq, cs); + + return cs; +} + static inline struct intel_context * g2h_context_lookup(struct intel_guc *guc, u32 desc_idx) { @@ -4615,6 +4885,19 @@ void intel_guc_submission_print_context_info(struct intel_guc *guc, drm_printf(p, "\t\tWQI Status: %u\n\n", READ_ONCE(desc->wq_status)); + drm_printf(p, "\t\tNumber Children: %u\n\n", + ce->guc_number_children); + if (ce->engine->emit_bb_start == + emit_bb_start_parent_bb_preempt_boundary) { + u8 i; + + drm_printf(p, "\t\tChildren Go: %u\n\n", + get_children_go_value(ce)); + for (i = 0; i < ce->guc_number_children; ++i) + 
drm_printf(p, "\t\tChildren Join: %u\n", + get_children_join_value(ce, i)); + } + for_each_child(ce, child) guc_log_context(p, child); } From patchwork Tue Jul 20 20:57:50 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389399 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 27326C636C8 for ; Tue, 20 Jul 2021 20:41:53 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id F080860FF3 for ; Tue, 20 Jul 2021 20:41:52 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org F080860FF3 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id C49C96E50D; Tue, 20 Jul 2021 20:41:03 +0000 (UTC) Received: from mga12.intel.com (mga12.intel.com [192.55.52.136]) by gabe.freedesktop.org (Postfix) with ESMTPS id AFE556E530; Tue, 20 Jul 2021 20:40:18 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="190909014" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="190909014" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:17 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906089" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 30/42] i915/drm: Move secure execbuf check to execbuf2 Date: Tue, 20 Jul 2021 13:57:50 -0700 Message-Id: <20210720205802.39610-31-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Goal is to remove all input sanity checks from the core submission. 
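Concretely, the I915_EXEC_SECURE privilege checks move out of
i915_gem_do_execbuffer() and into the execbuffer2 ioctl entry point, so the
core submission path only consumes an already-validated flag. A condensed
sketch of the resulting layering, using the checks taken verbatim from this
patch (the helper name below is illustrative only, not something the patch
adds):

	static int check_secure_exec(struct drm_i915_private *i915,
				     struct drm_file *file,
				     const struct drm_i915_gem_execbuffer2 *args)
	{
		if (!(args->flags & I915_EXEC_SECURE))
			return 0;

		if (GRAPHICS_VER(i915) >= 11)
			return -ENODEV;

		/* Return -EPERM to trigger fallback code on old binaries. */
		if (!HAS_SECURE_BATCHES(i915))
			return -EPERM;

		if (!drm_is_current_master(file) || !capable(CAP_SYS_ADMIN))
			return -EPERM;

		return 0;
	}

i915_gem_do_execbuffer() then only translates the flag into
I915_DISPATCH_SECURE without re-checking permissions.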
Signed-off-by: Tvrtko Ursulin Signed-off-by: Matthew Brost --- .../gpu/drm/i915/gem/i915_gem_execbuffer.c | 35 +++++++++++-------- 1 file changed, 21 insertions(+), 14 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index 1ed7475de454..70d352fc543f 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -3184,19 +3184,8 @@ i915_gem_do_execbuffer(struct drm_device *dev, eb.num_fences = 0; eb.batch_flags = 0; - if (args->flags & I915_EXEC_SECURE) { - if (GRAPHICS_VER(i915) >= 11) - return -ENODEV; - - /* Return -EPERM to trigger fallback code on old binaries. */ - if (!HAS_SECURE_BATCHES(i915)) - return -EPERM; - - if (!drm_is_current_master(file) || !capable(CAP_SYS_ADMIN)) - return -EPERM; - + if (args->flags & I915_EXEC_SECURE) eb.batch_flags |= I915_DISPATCH_SECURE; - } if (args->flags & I915_EXEC_IS_PINNED) eb.batch_flags |= I915_DISPATCH_PINNED; @@ -3414,6 +3403,18 @@ i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data, return -EINVAL; } + if (args->flags & I915_EXEC_SECURE) { + if (GRAPHICS_VER(i915) >= 11) + return -ENODEV; + + /* Return -EPERM to trigger fallback code on old binaries. */ + if (!HAS_SECURE_BATCHES(i915)) + return -EPERM; + + if (!drm_is_current_master(file) || !capable(CAP_SYS_ADMIN)) + return -EPERM; + } + err = i915_gem_check_execbuffer(args); if (err) return err; @@ -3430,8 +3431,8 @@ i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data, u64_to_user_ptr(args->buffers_ptr), sizeof(*exec2_list) * count)) { drm_dbg(&i915->drm, "copy %zd exec entries failed\n", count); - kvfree(exec2_list); - return -EFAULT; + err = -EFAULT; + goto err_copy; } err = i915_gem_do_execbuffer(dev, file, args, exec2_list); @@ -3476,6 +3477,12 @@ end:; args->flags &= ~__I915_EXEC_UNKNOWN_FLAGS; kvfree(exec2_list); + + return err; + +err_copy: + kvfree(exec2_list); + return err; } From patchwork Tue Jul 20 20:57:51 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389385 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 86DFAC07E95 for ; Tue, 20 Jul 2021 20:41:42 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 5BEBD60FF3 for ; Tue, 20 Jul 2021 20:41:42 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 5BEBD60FF3 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 52DA86E5CF; Tue, 20 Jul 2021 20:40:57 +0000 (UTC) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by gabe.freedesktop.org (Postfix) with ESMTPS id 4D4416E525; Tue, 20 Jul 2021 
20:40:18 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="296885377" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="296885377" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:17 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906090" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 31/42] drm/i915: Move input/exec fence handling to i915_gem_execbuffer2 Date: Tue, 20 Jul 2021 13:57:51 -0700 Message-Id: <20210720205802.39610-32-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Move the job of creating an input/exec fences (from a file descriptor) out of i915_gem_do_execbuffer. Signed-off-by: Tvrtko Ursulin Signed-off-by: Matthew Brost --- .../gpu/drm/i915/gem/i915_gem_execbuffer.c | 75 +++++++++++-------- 1 file changed, 43 insertions(+), 32 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index 70d352fc543f..0416bcb551b0 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -3146,11 +3146,12 @@ static int i915_gem_do_execbuffer(struct drm_device *dev, struct drm_file *file, struct drm_i915_gem_execbuffer2 *args, - struct drm_i915_gem_exec_object2 *exec) + struct drm_i915_gem_exec_object2 *exec, + struct dma_fence *in_fence, + struct dma_fence *exec_fence) { struct drm_i915_private *i915 = to_i915(dev); struct i915_execbuffer eb; - struct dma_fence *in_fence = NULL; struct sync_file *out_fence = NULL; struct i915_vma *batch; int out_fence_fd = -1; @@ -3197,25 +3198,10 @@ i915_gem_do_execbuffer(struct drm_device *dev, if (err) goto err_ext; -#define IN_FENCES (I915_EXEC_FENCE_IN | I915_EXEC_FENCE_SUBMIT) - if (args->flags & IN_FENCES) { - if ((args->flags & IN_FENCES) == IN_FENCES) - return -EINVAL; - - in_fence = sync_file_get_fence(lower_32_bits(args->rsvd2)); - if (!in_fence) { - err = -EINVAL; - goto err_ext; - } - } -#undef IN_FENCES - if (args->flags & I915_EXEC_FENCE_OUT) { out_fence_fd = get_unused_fd_flags(O_CLOEXEC); - if (out_fence_fd < 0) { - err = out_fence_fd; - goto err_in_fence; - } + if (out_fence_fd < 0) + goto err_ext; } err = eb_create(&eb); @@ -3277,13 +3263,16 @@ i915_gem_do_execbuffer(struct drm_device *dev, goto err_ext; } + if (exec_fence) { + err = i915_request_await_execution(eb.request, + exec_fence); + if (err < 0) + goto err_request; + } + if (in_fence) { - if (args->flags & I915_EXEC_FENCE_SUBMIT) - err = i915_request_await_execution(eb.request, - in_fence); - else - err = i915_request_await_dma_fence(eb.request, - in_fence); + err = i915_request_await_dma_fence(eb.request, + in_fence); if (err < 0) goto err_request; } @@ -3363,8 +3352,6 @@ i915_gem_do_execbuffer(struct drm_device *dev, err_out_fence: if (out_fence_fd != -1) put_unused_fd(out_fence_fd); -err_in_fence: - dma_fence_put(in_fence); err_ext: 
put_fence_array(eb.fences, eb.num_fences); return err; @@ -3395,6 +3382,8 @@ i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data, struct drm_i915_private *i915 = to_i915(dev); struct drm_i915_gem_execbuffer2 *args = data; struct drm_i915_gem_exec_object2 *exec2_list; + struct dma_fence *in_fence = NULL; + struct dma_fence *exec_fence = NULL; const size_t count = args->buffer_count; int err; @@ -3419,13 +3408,33 @@ i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data, if (err) return err; + if (args->flags & I915_EXEC_FENCE_IN) { + in_fence = sync_file_get_fence(lower_32_bits(args->rsvd2)); + if (!in_fence) + return -EINVAL; + } + + if (args->flags & I915_EXEC_FENCE_SUBMIT) { + if (in_fence) { + err = -EINVAL; + goto err_exec_fence; + } + + exec_fence = sync_file_get_fence(lower_32_bits(args->rsvd2)); + if (!exec_fence) { + err = -EINVAL; + goto err_exec_fence; + } + } + /* Allocate extra slots for use by the command parser */ exec2_list = kvmalloc_array(count + 2, eb_element_size(), __GFP_NOWARN | GFP_KERNEL); if (exec2_list == NULL) { drm_dbg(&i915->drm, "Failed to allocate exec list for %zd buffers\n", count); - return -ENOMEM; + err = -ENOMEM; + goto err_alloc; } if (copy_from_user(exec2_list, u64_to_user_ptr(args->buffers_ptr), @@ -3435,7 +3444,8 @@ i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data, goto err_copy; } - err = i915_gem_do_execbuffer(dev, file, args, exec2_list); + err = i915_gem_do_execbuffer(dev, file, args, exec2_list, in_fence, + exec_fence); /* * Now that we have begun execution of the batchbuffer, we ignore @@ -3476,12 +3486,13 @@ end:; } args->flags &= ~__I915_EXEC_UNKNOWN_FLAGS; - kvfree(exec2_list); - - return err; err_copy: kvfree(exec2_list); +err_alloc: + dma_fence_put(exec_fence); +err_exec_fence: + dma_fence_put(in_fence); return err; } From patchwork Tue Jul 20 20:57:52 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389387 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id ED215C07E9B for ; Tue, 20 Jul 2021 20:41:43 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id C2D1460FF3 for ; Tue, 20 Jul 2021 20:41:43 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org C2D1460FF3 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id E7F216E5C6; Tue, 20 Jul 2021 20:40:56 +0000 (UTC) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by gabe.freedesktop.org (Postfix) with ESMTPS id 658C26E529; Tue, 20 Jul 2021 20:40:18 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="209421889" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; 
d="scan'208";a="209421889" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:17 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906091" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 32/42] drm/i915: Move output fence handling to i915_gem_execbuffer2 Date: Tue, 20 Jul 2021 13:57:52 -0700 Message-Id: <20210720205802.39610-33-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Move the job of creating a new file descriptor and passing it back to userspace to i915_gem_execbuffer2. Signed-off-by: Tvrtko Ursulin Signed-off-by: Matthew Brost --- .../gpu/drm/i915/gem/i915_gem_execbuffer.c | 45 ++++++++++--------- 1 file changed, 25 insertions(+), 20 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index 0416bcb551b0..66f1819fcebc 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -3148,13 +3148,13 @@ i915_gem_do_execbuffer(struct drm_device *dev, struct drm_i915_gem_execbuffer2 *args, struct drm_i915_gem_exec_object2 *exec, struct dma_fence *in_fence, - struct dma_fence *exec_fence) + struct dma_fence *exec_fence, + int out_fence_fd) { struct drm_i915_private *i915 = to_i915(dev); struct i915_execbuffer eb; struct sync_file *out_fence = NULL; struct i915_vma *batch; - int out_fence_fd = -1; int err; BUILD_BUG_ON(__EXEC_INTERNAL_FLAGS & ~__I915_EXEC_ILLEGAL_FLAGS); @@ -3198,15 +3198,9 @@ i915_gem_do_execbuffer(struct drm_device *dev, if (err) goto err_ext; - if (args->flags & I915_EXEC_FENCE_OUT) { - out_fence_fd = get_unused_fd_flags(O_CLOEXEC); - if (out_fence_fd < 0) - goto err_ext; - } - err = eb_create(&eb); if (err) - goto err_out_fence; + goto err_ext; GEM_BUG_ON(!eb.lut_size); @@ -3283,7 +3277,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, goto err_request; } - if (out_fence_fd != -1) { + if (out_fence_fd >= 0) { out_fence = sync_file_create(&eb.request->fence); if (!out_fence) { err = -ENOMEM; @@ -3313,14 +3307,10 @@ i915_gem_do_execbuffer(struct drm_device *dev, signal_fence_array(&eb); if (out_fence) { - if (err == 0) { + if (err == 0) fd_install(out_fence_fd, out_fence->file); - args->rsvd2 &= GENMASK_ULL(31, 0); /* keep in-fence */ - args->rsvd2 |= (u64)out_fence_fd << 32; - out_fence_fd = -1; - } else { + else fput(out_fence->file); - } } if (unlikely(eb.gem_context->syncobj)) { @@ -3349,9 +3339,6 @@ i915_gem_do_execbuffer(struct drm_device *dev, i915_gem_context_put(eb.gem_context); err_destroy: eb_destroy(&eb); -err_out_fence: - if (out_fence_fd != -1) - put_unused_fd(out_fence_fd); err_ext: put_fence_array(eb.fences, eb.num_fences); return err; @@ -3384,6 +3371,7 @@ i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data, struct drm_i915_gem_exec_object2 *exec2_list; struct dma_fence *in_fence = NULL; struct dma_fence *exec_fence = NULL; 
+ int out_fence_fd = -1; const size_t count = args->buffer_count; int err; @@ -3427,6 +3415,14 @@ i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data, } } + if (args->flags & I915_EXEC_FENCE_OUT) { + out_fence_fd = get_unused_fd_flags(O_CLOEXEC); + if (out_fence_fd < 0) { + err = out_fence_fd; + goto err_out_fence; + } + } + /* Allocate extra slots for use by the command parser */ exec2_list = kvmalloc_array(count + 2, eb_element_size(), __GFP_NOWARN | GFP_KERNEL); @@ -3445,7 +3441,7 @@ i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data, } err = i915_gem_do_execbuffer(dev, file, args, exec2_list, in_fence, - exec_fence); + exec_fence, out_fence_fd); /* * Now that we have begun execution of the batchbuffer, we ignore @@ -3485,11 +3481,20 @@ i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data, end:; } + if (!err && out_fence_fd >= 0) { + args->rsvd2 &= GENMASK_ULL(31, 0); /* keep in-fence */ + args->rsvd2 |= (u64)out_fence_fd << 32; + out_fence_fd = -1; + } + args->flags &= ~__I915_EXEC_UNKNOWN_FLAGS; err_copy: kvfree(exec2_list); err_alloc: + if (out_fence_fd >= 0) + put_unused_fd(out_fence_fd); +err_out_fence: dma_fence_put(exec_fence); err_exec_fence: dma_fence_put(in_fence); From patchwork Tue Jul 20 20:57:53 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389375 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 52869C07E95 for ; Tue, 20 Jul 2021 20:41:35 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 2751660FEA for ; Tue, 20 Jul 2021 20:41:35 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 2751660FEA Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 61A6B6E5B6; Tue, 20 Jul 2021 20:40:49 +0000 (UTC) Received: from mga12.intel.com (mga12.intel.com [192.55.52.136]) by gabe.freedesktop.org (Postfix) with ESMTPS id D49506E512; Tue, 20 Jul 2021 20:40:18 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="190909015" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="190909015" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:17 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906093" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 33/42] drm/i915: Return output fence from i915_gem_do_execbuffer Date: Tue, 20 Jul 2021 13:57:53 -0700 Message-Id: 
<20210720205802.39610-34-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Move the job of creating a new sync fence and installing it onto a file descriptor to i915_gem_execbuffer2. Suggested-by: Tvrtko Ursulin Signed-off-by: Matthew Brost --- .../gpu/drm/i915/gem/i915_gem_execbuffer.c | 39 +++++++++---------- 1 file changed, 19 insertions(+), 20 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index 66f1819fcebc..40311583f03d 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -3149,11 +3149,10 @@ i915_gem_do_execbuffer(struct drm_device *dev, struct drm_i915_gem_exec_object2 *exec, struct dma_fence *in_fence, struct dma_fence *exec_fence, - int out_fence_fd) + struct dma_fence **out_fence) { struct drm_i915_private *i915 = to_i915(dev); struct i915_execbuffer eb; - struct sync_file *out_fence = NULL; struct i915_vma *batch; int err; @@ -3277,14 +3276,6 @@ i915_gem_do_execbuffer(struct drm_device *dev, goto err_request; } - if (out_fence_fd >= 0) { - out_fence = sync_file_create(&eb.request->fence); - if (!out_fence) { - err = -ENOMEM; - goto err_request; - } - } - /* * Whilst this request exists, batch_obj will be on the * active_list, and so will hold the active reference. Only when this @@ -3306,12 +3297,8 @@ i915_gem_do_execbuffer(struct drm_device *dev, if (eb.fences) signal_fence_array(&eb); - if (out_fence) { - if (err == 0) - fd_install(out_fence_fd, out_fence->file); - else - fput(out_fence->file); - } + if (!err && out_fence) + *out_fence = dma_fence_get(&eb.request->fence); if (unlikely(eb.gem_context->syncobj)) { drm_syncobj_replace_fence(eb.gem_context->syncobj, @@ -3369,6 +3356,8 @@ i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data, struct drm_i915_private *i915 = to_i915(dev); struct drm_i915_gem_execbuffer2 *args = data; struct drm_i915_gem_exec_object2 *exec2_list; + struct dma_fence **out_fence_p = NULL; + struct dma_fence *out_fence = NULL; struct dma_fence *in_fence = NULL; struct dma_fence *exec_fence = NULL; int out_fence_fd = -1; @@ -3421,6 +3410,7 @@ i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data, err = out_fence_fd; goto err_out_fence; } + out_fence_p = &out_fence; } /* Allocate extra slots for use by the command parser */ @@ -3441,7 +3431,7 @@ i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data, } err = i915_gem_do_execbuffer(dev, file, args, exec2_list, in_fence, - exec_fence, out_fence_fd); + exec_fence, out_fence_p); /* * Now that we have begun execution of the batchbuffer, we ignore @@ -3482,9 +3472,18 @@ end:; } if (!err && out_fence_fd >= 0) { - args->rsvd2 &= GENMASK_ULL(31, 0); /* keep in-fence */ - args->rsvd2 |= (u64)out_fence_fd << 32; - out_fence_fd = -1; + struct sync_file *sync_fence; + + sync_fence = sync_file_create(out_fence); + if (sync_fence) { + fd_install(out_fence_fd, sync_fence->file); + args->rsvd2 &= GENMASK_ULL(31, 0); /* keep in-fence */ + args->rsvd2 |= (u64)out_fence_fd << 32; + out_fence_fd = -1; + } + dma_fence_put(out_fence); + } 
else if (out_fence) { + dma_fence_put(out_fence); } args->flags &= ~__I915_EXEC_UNKNOWN_FLAGS; From patchwork Tue Jul 20 20:57:54 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389369 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 64933C636CA for ; Tue, 20 Jul 2021 20:41:30 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 3B44F60FF3 for ; Tue, 20 Jul 2021 20:41:30 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 3B44F60FF3 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 11B566E532; Tue, 20 Jul 2021 20:40:48 +0000 (UTC) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by gabe.freedesktop.org (Postfix) with ESMTPS id 644966E527; Tue, 20 Jul 2021 20:40:18 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="296885378" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="296885378" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:17 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906094" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 34/42] drm/i915: Store batch index in struct i915_execbuffer Date: Tue, 20 Jul 2021 13:57:54 -0700 Message-Id: <20210720205802.39610-35-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" This will help with upcoming extensions where more than 1 batch can be submitted in a single execbuf IOCTL. 
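Specifically, the index is now computed once in eb_create() from the
existing I915_EXEC_BATCH_FIRST flag and stored in the new batch_index
field, instead of being recomputed by eb_batch_index() during the VMA
lookup. Condensed from the diff below (illustrative sketch of the same
logic):

	if (eb->args->flags & I915_EXEC_BATCH_FIRST)
		eb->batch_index = 0;			/* batch leads the object list */
	else
		eb->batch_index = eb->args->buffer_count - 1;	/* legacy: batch is last */

eb_lookup_vmas() then passes eb->batch_index to eb_add_vma() and the old
eb_batch_index() helper is removed.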
Signed-off-by: Tvrtko Ursulin Signed-off-by: Matthew Brost --- .../gpu/drm/i915/gem/i915_gem_execbuffer.c | 19 +++++++++---------- 1 file changed, 9 insertions(+), 10 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index 40311583f03d..1f1f477e46b4 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -252,6 +252,9 @@ struct i915_execbuffer { struct eb_vma *batch; /** identity of the batch obj/vma */ struct i915_vma *trampoline; /** trampoline used for chaining */ + /* batch_index in vma list */ + unsigned int batch_index; + /** actual size of execobj[] as we may extend it for the cmdparser */ unsigned int buffer_count; @@ -361,6 +364,11 @@ static int eb_create(struct i915_execbuffer *eb) eb->lut_size = -eb->buffer_count; } + if (eb->args->flags & I915_EXEC_BATCH_FIRST) + eb->batch_index = 0; + else + eb->batch_index = eb->args->buffer_count - 1; + return 0; } @@ -735,14 +743,6 @@ static int eb_reserve(struct i915_execbuffer *eb) } while (1); } -static unsigned int eb_batch_index(const struct i915_execbuffer *eb) -{ - if (eb->args->flags & I915_EXEC_BATCH_FIRST) - return 0; - else - return eb->buffer_count - 1; -} - static int eb_select_context(struct i915_execbuffer *eb) { struct i915_gem_context *ctx; @@ -852,7 +852,6 @@ static struct i915_vma *eb_lookup_vma(struct i915_execbuffer *eb, u32 handle) static int eb_lookup_vmas(struct i915_execbuffer *eb) { struct drm_i915_private *i915 = eb->i915; - unsigned int batch = eb_batch_index(eb); unsigned int i; int err = 0; @@ -873,7 +872,7 @@ static int eb_lookup_vmas(struct i915_execbuffer *eb) goto err; } - eb_add_vma(eb, i, batch, vma); + eb_add_vma(eb, i, eb->batch_index, vma); if (i915_gem_object_is_userptr(vma->obj)) { err = i915_gem_object_userptr_submit_init(vma->obj); From patchwork Tue Jul 20 20:57:55 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389389 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 603AAC07E95 for ; Tue, 20 Jul 2021 20:41:45 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 33FC060FEA for ; Tue, 20 Jul 2021 20:41:45 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 33FC060FEA Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id C82C26E5C0; Tue, 20 Jul 2021 20:40:56 +0000 (UTC) Received: from mga12.intel.com (mga12.intel.com [192.55.52.136]) by gabe.freedesktop.org (Postfix) with ESMTPS id 0F0EA6E524; Tue, 20 Jul 2021 20:40:19 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="190909016" 
X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="190909016" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:17 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906095" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 35/42] drm/i915: Allow callers of i915_gem_do_execbuffer to override the batch index Date: Tue, 20 Jul 2021 13:57:55 -0700 Message-Id: <20210720205802.39610-36-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Allow specifying the batch directly over what is inferred from passed in execbuf flags. Signed-off-by: Tvrtko Ursulin Signed-off-by: Matthew Brost --- drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index 1f1f477e46b4..707e12725f74 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -3146,6 +3146,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, struct drm_file *file, struct drm_i915_gem_execbuffer2 *args, struct drm_i915_gem_exec_object2 *exec, + int batch_index, struct dma_fence *in_fence, struct dma_fence *exec_fence, struct dma_fence **out_fence) @@ -3202,6 +3203,9 @@ i915_gem_do_execbuffer(struct drm_device *dev, GEM_BUG_ON(!eb.lut_size); + if (batch_index >= 0) + eb.batch_index = batch_index; + err = eb_select_context(&eb); if (unlikely(err)) goto err_destroy; @@ -3429,7 +3433,7 @@ i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data, goto err_copy; } - err = i915_gem_do_execbuffer(dev, file, args, exec2_list, in_fence, + err = i915_gem_do_execbuffer(dev, file, args, exec2_list, -1, in_fence, exec_fence, out_fence_p); /* From patchwork Tue Jul 20 20:57:56 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389357 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A5E9CC6377B for ; Tue, 20 Jul 2021 20:41:16 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 7782760FEA for ; Tue, 20 Jul 2021 20:41:16 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7782760FEA Authentication-Results: 
mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 4DDDC6E588; Tue, 20 Jul 2021 20:40:35 +0000 (UTC) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by gabe.freedesktop.org (Postfix) with ESMTPS id 89CA16E52C; Tue, 20 Jul 2021 20:40:18 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="209421890" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="209421890" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:17 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906096" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 36/42] drm/i915: Teach execbuf there can be more than one batch in the objects list Date: Tue, 20 Jul 2021 13:57:56 -0700 Message-Id: <20210720205802.39610-37-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" In case of multiple batches all batches will be at the beginning on the exec objects array or at the end based on the existing execbuffer2 flag. Batches not executed in the current execbuf call will not be processed for relocations or but will be pinned in same manner as the current batch. This will enable multiple do_execbuf calls with a single exec object array in a later patch. Suggested-by: Tvrtko Ursulin Signed-off-by: Matthew Brost --- .../gpu/drm/i915/gem/i915_gem_execbuffer.c | 31 +++++++++++++------ 1 file changed, 22 insertions(+), 9 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index 707e12725f74..fe3242f4f2e6 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -255,6 +255,9 @@ struct i915_execbuffer { /* batch_index in vma list */ unsigned int batch_index; + /* number of batches in execbuf IOCTL */ + unsigned int num_batches; + /** actual size of execobj[] as we may extend it for the cmdparser */ unsigned int buffer_count; @@ -511,6 +514,14 @@ static bool platform_has_relocs_enabled(const struct i915_execbuffer *eb) return false; } +static inline bool +is_batch_buffer(struct i915_execbuffer *eb, unsigned buffer_idx) +{ + return eb->args->flags & I915_EXEC_BATCH_FIRST ? 
+ buffer_idx <= eb->num_batches : + buffer_idx >= eb->args->buffer_count - eb->num_batches; +} + static int eb_validate_vma(struct i915_execbuffer *eb, struct drm_i915_gem_exec_object2 *entry, @@ -562,11 +573,10 @@ eb_validate_vma(struct i915_execbuffer *eb, static void eb_add_vma(struct i915_execbuffer *eb, - unsigned int i, unsigned batch_idx, - struct i915_vma *vma) + unsigned int buffer_idx, struct i915_vma *vma) { - struct drm_i915_gem_exec_object2 *entry = &eb->exec[i]; - struct eb_vma *ev = &eb->vma[i]; + struct drm_i915_gem_exec_object2 *entry = &eb->exec[buffer_idx]; + struct eb_vma *ev = &eb->vma[buffer_idx]; ev->vma = vma; ev->exec = entry; @@ -591,14 +601,15 @@ eb_add_vma(struct i915_execbuffer *eb, * Note that actual hangs have only been observed on gen7, but for * paranoia do it everywhere. */ - if (i == batch_idx) { + if (is_batch_buffer(eb, buffer_idx)) { if (entry->relocation_count && !(ev->flags & EXEC_OBJECT_PINNED)) ev->flags |= __EXEC_OBJECT_NEEDS_BIAS; if (eb->reloc_cache.has_fence) ev->flags |= EXEC_OBJECT_NEEDS_FENCE; - eb->batch = ev; + if (buffer_idx == eb->batch_index) + eb->batch = ev; } } @@ -872,7 +883,7 @@ static int eb_lookup_vmas(struct i915_execbuffer *eb) goto err; } - eb_add_vma(eb, i, eb->batch_index, vma); + eb_add_vma(eb, i, vma); if (i915_gem_object_is_userptr(vma->obj)) { err = i915_gem_object_userptr_submit_init(vma->obj); @@ -3147,6 +3158,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, struct drm_i915_gem_execbuffer2 *args, struct drm_i915_gem_exec_object2 *exec, int batch_index, + unsigned int num_batches, struct dma_fence *in_fence, struct dma_fence *exec_fence, struct dma_fence **out_fence) @@ -3203,6 +3215,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, GEM_BUG_ON(!eb.lut_size); + eb.num_batches = num_batches; if (batch_index >= 0) eb.batch_index = batch_index; @@ -3433,8 +3446,8 @@ i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data, goto err_copy; } - err = i915_gem_do_execbuffer(dev, file, args, exec2_list, -1, in_fence, - exec_fence, out_fence_p); + err = i915_gem_do_execbuffer(dev, file, args, exec2_list, -1, 1, + in_fence, exec_fence, out_fence_p); /* * Now that we have begun execution of the batchbuffer, we ignore From patchwork Tue Jul 20 20:57:57 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389397 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id AEDC7C07E9B for ; Tue, 20 Jul 2021 20:41:51 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 84F3660FF3 for ; Tue, 20 Jul 2021 20:41:51 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 84F3660FF3 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost 
[127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 53D706E5D2; Tue, 20 Jul 2021 20:41:02 +0000 (UTC) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by gabe.freedesktop.org (Postfix) with ESMTPS id 7CF456E52A; Tue, 20 Jul 2021 20:40:18 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="296885379" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="296885379" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:17 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906097" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 37/42] drm/i915: Only track object dependencies on first request Date: Tue, 20 Jul 2021 13:57:57 -0700 Message-Id: <20210720205802.39610-38-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Only track object dependencies on the first request generated from the execbuf, this help with the upcoming multi-bb execbuf extension. Signed-off-by: Matthew Brost --- drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index fe3242f4f2e6..1a91e21132be 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -2239,7 +2239,7 @@ static int eb_relocate_parse(struct i915_execbuffer *eb) return err; } -static int eb_move_to_gpu(struct i915_execbuffer *eb) +static int eb_move_to_gpu(struct i915_execbuffer *eb, bool first) { const unsigned int count = eb->buffer_count; unsigned int i = count; @@ -2281,7 +2281,7 @@ static int eb_move_to_gpu(struct i915_execbuffer *eb) flags &= ~EXEC_OBJECT_ASYNC; } - if (err == 0 && !(flags & EXEC_OBJECT_ASYNC)) { + if (err == 0 && first && !(flags & EXEC_OBJECT_ASYNC)) { err = i915_request_await_object (eb->request, obj, flags & EXEC_OBJECT_WRITE); } @@ -2525,14 +2525,15 @@ static int eb_parse(struct i915_execbuffer *eb) return err; } -static int eb_submit(struct i915_execbuffer *eb, struct i915_vma *batch) +static int eb_submit(struct i915_execbuffer *eb, struct i915_vma *batch, + bool first) { int err; if (intel_context_nopreempt(eb->context)) __set_bit(I915_FENCE_FLAG_NOPREEMPT, &eb->request->fence.flags); - err = eb_move_to_gpu(eb); + err = eb_move_to_gpu(eb, first); if (err) return err; @@ -3304,7 +3305,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, intel_gt_buffer_pool_mark_active(eb.batch_pool, eb.request); trace_i915_request_queue(eb.request, eb.batch_flags); - err = eb_submit(&eb, batch); + err = eb_submit(&eb, batch, true); err_request: i915_request_get(eb.request); From patchwork Tue Jul 20 20:57:58 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389379 Return-Path: 
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 710CFC07E9B for ; Tue, 20 Jul 2021 20:41:37 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 44ABC60FEA for ; Tue, 20 Jul 2021 20:41:37 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 44ABC60FEA Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 43D2D6E5B4; Tue, 20 Jul 2021 20:40:49 +0000 (UTC) Received: from mga12.intel.com (mga12.intel.com [192.55.52.136]) by gabe.freedesktop.org (Postfix) with ESMTPS id 3CC476E52F; Tue, 20 Jul 2021 20:40:19 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="190909017" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="190909017" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:17 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906098" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 38/42] drm/i915: Force parallel contexts to use copy engine for reloc Date: Tue, 20 Jul 2021 13:57:58 -0700 Message-Id: <20210720205802.39610-39-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Submitting to a subset of hardware contexts is not allowed, so use the copy engine for GPU relocations when using a parallel context. 
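A condensed sketch of the resulting engine selection in reloc_gpu(),
mirroring the hunk below (illustrative only):

	if (!reloc_can_use_engine(engine) ||
	    intel_context_is_parallel(eb->context)) {
		engine = engine->gt->engine_class[COPY_ENGINE_CLASS][0];
		if (!engine)
			return ERR_PTR(-ENODEV);	/* no copy engine on this GT */
	}

__reloc_gpu_alloc() is adjusted the same way, so a parallel context never
builds the relocation request on its own multi-engine context.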
Signed-off-by: Matthew Brost --- drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index 1a91e21132be..cf4288ce7141 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -1386,7 +1386,8 @@ static int __reloc_gpu_alloc(struct i915_execbuffer *eb, if (err) goto err_unmap; - if (engine == eb->context->engine) { + if (engine == eb->context->engine && + !intel_context_is_parallel(eb->context)) { rq = i915_request_create(eb->context); } else { struct intel_context *ce = eb->reloc_context; @@ -1483,7 +1484,8 @@ static u32 *reloc_gpu(struct i915_execbuffer *eb, if (eb_use_cmdparser(eb)) return ERR_PTR(-EWOULDBLOCK); - if (!reloc_can_use_engine(engine)) { + if (!reloc_can_use_engine(engine) || + intel_context_is_parallel(eb->context)) { engine = engine->gt->engine_class[COPY_ENGINE_CLASS][0]; if (!engine) return ERR_PTR(-ENODEV); From patchwork Tue Jul 20 20:57:59 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389351 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 89469C636C8 for ; Tue, 20 Jul 2021 20:41:07 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 5D90661004 for ; Tue, 20 Jul 2021 20:41:07 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 5D90661004 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id B7E046E53E; Tue, 20 Jul 2021 20:40:28 +0000 (UTC) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by gabe.freedesktop.org (Postfix) with ESMTPS id 8BE136E52E; Tue, 20 Jul 2021 20:40:18 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="296885380" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="296885380" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:17 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906099" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 39/42] drm/i915: Multi-batch execbuffer2 Date: Tue, 20 Jul 2021 13:57:59 -0700 Message-Id: <20210720205802.39610-40-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: 
dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" For contexts with width set to two or more, we add a mode to execbuf2 which implies there are N batch buffers in the buffer list, each of which will be sent to one of the engines from the engine map array (I915_CONTEXT_PARAM_ENGINES, I915_CONTEXT_ENGINES_EXT_PARALLEL_SUBMIT). Those N batches can either be first N, or last N objects in the list as controlled by the existing execbuffer2 flag. The N batches will be submitted to consecutive engines from the previously configured allowed engine array starting at index 0. Input and output fences are fully supported, with the latter getting signalled when all batch buffers have completed. Last, it isn't safe for subsequent batches to touch any objects written to by a multi-BB submission until all the batches in that submission complete. As such all batches in a multi-BB submission must be combined into a single composite fence and put into the dma reseveration excl fence slot. Suggested-by: Tvrtko Ursulin Signed-off-by: Matthew Brost --- .../gpu/drm/i915/gem/i915_gem_execbuffer.c | 263 +++++++++++++++--- drivers/gpu/drm/i915/gt/intel_context.c | 5 + drivers/gpu/drm/i915/gt/intel_context_types.h | 9 + drivers/gpu/drm/i915/i915_vma.c | 13 +- drivers/gpu/drm/i915/i915_vma.h | 16 +- 5 files changed, 267 insertions(+), 39 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index cf4288ce7141..1e3a8d9b965c 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -252,6 +252,9 @@ struct i915_execbuffer { struct eb_vma *batch; /** identity of the batch obj/vma */ struct i915_vma *trampoline; /** trampoline used for chaining */ + /** used for excl fence in dma_resv objects when > 1 BB submitted */ + struct dma_fence *composite_fence; + /* batch_index in vma list */ unsigned int batch_index; @@ -367,11 +370,6 @@ static int eb_create(struct i915_execbuffer *eb) eb->lut_size = -eb->buffer_count; } - if (eb->args->flags & I915_EXEC_BATCH_FIRST) - eb->batch_index = 0; - else - eb->batch_index = eb->args->buffer_count - 1; - return 0; } @@ -2241,7 +2239,7 @@ static int eb_relocate_parse(struct i915_execbuffer *eb) return err; } -static int eb_move_to_gpu(struct i915_execbuffer *eb, bool first) +static int eb_move_to_gpu(struct i915_execbuffer *eb, bool first, bool last) { const unsigned int count = eb->buffer_count; unsigned int i = count; @@ -2289,8 +2287,16 @@ static int eb_move_to_gpu(struct i915_execbuffer *eb, bool first) } if (err == 0) - err = i915_vma_move_to_active(vma, eb->request, - flags | __EXEC_OBJECT_NO_RESERVE); + err = _i915_vma_move_to_active(vma, eb->request, + flags | __EXEC_OBJECT_NO_RESERVE, + !last ? + NULL : + eb->composite_fence ? + eb->composite_fence : + &eb->request->fence, + eb->composite_fence ? 
+ eb->composite_fence : + &eb->request->fence); } #ifdef CONFIG_MMU_NOTIFIER @@ -2528,14 +2534,14 @@ static int eb_parse(struct i915_execbuffer *eb) } static int eb_submit(struct i915_execbuffer *eb, struct i915_vma *batch, - bool first) + bool first, bool last) { int err; if (intel_context_nopreempt(eb->context)) __set_bit(I915_FENCE_FLAG_NOPREEMPT, &eb->request->fence.flags); - err = eb_move_to_gpu(eb, first); + err = eb_move_to_gpu(eb, first, last); if (err) return err; @@ -2748,7 +2754,7 @@ eb_select_legacy_ring(struct i915_execbuffer *eb) } static int -eb_select_engine(struct i915_execbuffer *eb) +eb_select_engine(struct i915_execbuffer *eb, unsigned int batch_number) { struct intel_context *ce; unsigned int idx; @@ -2763,6 +2769,18 @@ eb_select_engine(struct i915_execbuffer *eb) if (IS_ERR(ce)) return PTR_ERR(ce); + if (batch_number > 0) { + struct intel_context *parent = ce; + + GEM_BUG_ON(!intel_context_is_parent(parent)); + + for_each_child(parent, ce) + if (!--batch_number) + break; + intel_context_put(parent); + intel_context_get(ce); + } + intel_gt_pm_get(ce->engine->gt); if (!test_bit(CONTEXT_ALLOC_BIT, &ce->flags)) { @@ -3155,13 +3173,49 @@ parse_execbuf2_extensions(struct drm_i915_gem_execbuffer2 *args, eb); } +static int setup_composite_fence(struct i915_execbuffer *eb, + struct dma_fence **out_fence, + unsigned int num_batches) +{ + struct dma_fence_array *fence_array; + struct dma_fence **fences = kmalloc(num_batches * sizeof(*fences), + GFP_KERNEL); + struct intel_context *parent = intel_context_to_parent(eb->context); + int i; + + if (!fences) + return -ENOMEM; + + for (i = 0; i < num_batches; ++i) + fences[i] = out_fence[i]; + + fence_array = dma_fence_array_create(num_batches, + fences, + parent->fence_context, + ++parent->seqno, + false); + if (!fence_array) { + kfree(fences); + return -ENOMEM; + } + + /* Move ownership to the dma_fence_array created above */ + for (i = 0; i < num_batches; ++i) + dma_fence_get(fences[i]); + + eb->composite_fence = &fence_array->base; + + return 0; +} + static int i915_gem_do_execbuffer(struct drm_device *dev, struct drm_file *file, struct drm_i915_gem_execbuffer2 *args, struct drm_i915_gem_exec_object2 *exec, - int batch_index, + unsigned int batch_index, unsigned int num_batches, + unsigned int batch_number, struct dma_fence *in_fence, struct dma_fence *exec_fence, struct dma_fence **out_fence) @@ -3170,6 +3224,8 @@ i915_gem_do_execbuffer(struct drm_device *dev, struct i915_execbuffer eb; struct i915_vma *batch; int err; + bool first = batch_number == 0; + bool last = batch_number + 1 == num_batches; BUILD_BUG_ON(__EXEC_INTERNAL_FLAGS & ~__I915_EXEC_ILLEGAL_FLAGS); BUILD_BUG_ON(__EXEC_OBJECT_INTERNAL_FLAGS & @@ -3194,6 +3250,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, eb.batch_start_offset = args->batch_start_offset; eb.batch_len = args->batch_len; eb.trampoline = NULL; + eb.composite_fence = NULL; eb.fences = NULL; eb.num_fences = 0; @@ -3219,14 +3276,13 @@ i915_gem_do_execbuffer(struct drm_device *dev, GEM_BUG_ON(!eb.lut_size); eb.num_batches = num_batches; - if (batch_index >= 0) - eb.batch_index = batch_index; + eb.batch_index = batch_index; err = eb_select_context(&eb); if (unlikely(err)) goto err_destroy; - err = eb_select_engine(&eb); + err = eb_select_engine(&eb, batch_number); if (unlikely(err)) goto err_context; @@ -3282,6 +3338,10 @@ i915_gem_do_execbuffer(struct drm_device *dev, goto err_request; } + if (last) + set_bit(I915_FENCE_FLAG_SUBMIT_PARALLEL, + &eb.request->fence.flags); + if (in_fence) { err = 
i915_request_await_dma_fence(eb.request, in_fence); @@ -3295,6 +3355,23 @@ i915_gem_do_execbuffer(struct drm_device *dev, goto err_request; } + if (out_fence) { + /* Move ownership to caller (i915_gem_execbuffer2_ioctl) */ + out_fence[batch_number] = dma_fence_get(&eb.request->fence); + + /* + * Need to create a composite fence (dma_fence_array, + * eb.composite_fence) for the excl fence of the dma_resv + * objects as each BB can write to the object. Since we create + */ + if (num_batches > 1 && last) { + err = setup_composite_fence(&eb, out_fence, + num_batches); + if (err < 0) + goto err_request; + } + } + /* * Whilst this request exists, batch_obj will be on the * active_list, and so will hold the active reference. Only when this @@ -3307,7 +3384,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, intel_gt_buffer_pool_mark_active(eb.batch_pool, eb.request); trace_i915_request_queue(eb.request, eb.batch_flags); - err = eb_submit(&eb, batch, true); + err = eb_submit(&eb, batch, first, last); err_request: i915_request_get(eb.request); @@ -3316,8 +3393,14 @@ i915_gem_do_execbuffer(struct drm_device *dev, if (eb.fences) signal_fence_array(&eb); - if (!err && out_fence) - *out_fence = dma_fence_get(&eb.request->fence); + /* + * Ownership of the composite fence (dma_fence_array, + * eb.composite_fence) has been moved to the dma_resv objects these BB + * write to in i915_vma_move_to_active. It is ok to release the creation + * reference of this fence now. + */ + if (eb.composite_fence) + dma_fence_put(eb.composite_fence); if (unlikely(eb.gem_context->syncobj)) { drm_syncobj_replace_fence(eb.gem_context->syncobj, @@ -3368,6 +3451,17 @@ static bool check_buffer_count(size_t count) return !(count < 1 || count > INT_MAX || count > SIZE_MAX / sz - 1); } +/* Release fences from the dma_fence_get in i915_gem_do_execbuffer. 
*/ +static inline void put_out_fences(struct dma_fence **out_fences, + unsigned int num_batches) +{ + int i; + + for (i = 0; i < num_batches; ++i) + if (out_fences[i]) + dma_fence_put(out_fences[i]); +} + int i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data, struct drm_file *file) @@ -3375,13 +3469,16 @@ i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data, struct drm_i915_private *i915 = to_i915(dev); struct drm_i915_gem_execbuffer2 *args = data; struct drm_i915_gem_exec_object2 *exec2_list; - struct dma_fence **out_fence_p = NULL; - struct dma_fence *out_fence = NULL; + struct dma_fence **out_fences = NULL; struct dma_fence *in_fence = NULL; struct dma_fence *exec_fence = NULL; int out_fence_fd = -1; const size_t count = args->buffer_count; int err; + struct i915_gem_context *ctx; + struct intel_context *parent = NULL; + unsigned int num_batches = 1, i; + bool is_parallel = false; if (!check_buffer_count(count)) { drm_dbg(&i915->drm, "execbuf2 with %zd buffers\n", count); @@ -3404,10 +3501,39 @@ i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data, if (err) return err; + ctx = i915_gem_context_lookup(file->driver_priv, args->rsvd1); + if (unlikely(IS_ERR(ctx))) + return PTR_ERR(ctx); + + if (i915_gem_context_user_engines(ctx)) { + parent = i915_gem_context_get_engine(ctx, args->flags & + I915_EXEC_RING_MASK); + if (IS_ERR(parent)) { + err = PTR_ERR(parent); + goto err_context; + } + + if (intel_context_is_parent(parent)) { + if (args->batch_len) { + err = -EINVAL; + goto err_context; + } + + num_batches = parent->guc_number_children + 1; + if (num_batches > count) { + i915_gem_context_put(ctx); + goto err_parent; + } + is_parallel = true; + } + } + if (args->flags & I915_EXEC_FENCE_IN) { in_fence = sync_file_get_fence(lower_32_bits(args->rsvd2)); - if (!in_fence) - return -EINVAL; + if (!in_fence) { + err = -EINVAL; + goto err_parent; + } } if (args->flags & I915_EXEC_FENCE_SUBMIT) { @@ -3423,15 +3549,28 @@ i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data, } } - if (args->flags & I915_EXEC_FENCE_OUT) { - out_fence_fd = get_unused_fd_flags(O_CLOEXEC); - if (out_fence_fd < 0) { - err = out_fence_fd; + /* + * We always allocate out fences when doing multi-BB submission as + * this is required to create an excl fence for any dma buf objects + * these BBs touch. + */ + if (args->flags & I915_EXEC_FENCE_OUT || is_parallel) { + out_fences = kcalloc(num_batches, sizeof(*out_fences), + GFP_KERNEL); + if (!out_fences) { + err = -ENOMEM; goto err_out_fence; } - out_fence_p = &out_fence; + if (args->flags & I915_EXEC_FENCE_OUT) { + out_fence_fd = get_unused_fd_flags(O_CLOEXEC); + if (out_fence_fd < 0) { + err = out_fence_fd; + goto err_out_fence; + } + } } + /* Allocate extra slots for use by the command parser */ exec2_list = kvmalloc_array(count + 2, eb_element_size(), __GFP_NOWARN | GFP_KERNEL); @@ -3449,8 +3588,35 @@ i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data, goto err_copy; } - err = i915_gem_do_execbuffer(dev, file, args, exec2_list, -1, 1, - in_fence, exec_fence, out_fence_p); + /* + * Downstream submission code expects all parallel submissions to occur + * in intel_context sequence, thus only 1 submission can happen at a + * time. + */ + if (is_parallel) + mutex_lock(&parent->parallel_submit); + + err = i915_gem_do_execbuffer(dev, file, args, exec2_list, + args->flags & I915_EXEC_BATCH_FIRST ? 
+ 0 : count - num_batches, + num_batches, + 0, + in_fence, + exec_fence, + out_fences); + + for (i = 1; err == 0 && i < num_batches; i++) + err = i915_gem_do_execbuffer(dev, file, args, exec2_list, + args->flags & I915_EXEC_BATCH_FIRST ? + i : count - num_batches + i, + num_batches, + i, + NULL, + NULL, + out_fences); + + if (is_parallel) + mutex_unlock(&parent->parallel_submit); /* * Now that we have begun execution of the batchbuffer, we ignore @@ -3491,8 +3657,31 @@ end:; } if (!err && out_fence_fd >= 0) { + struct dma_fence *out_fence = NULL; struct sync_file *sync_fence; + if (is_parallel) { + struct dma_fence_array *fence_array; + + /* + * The dma_fence_array now owns out_fences (from + * dma_fence_get in i915_gem_do_execbuffer) assuming + * successful creation of dma_fence_array. + */ + fence_array = dma_fence_array_create(num_batches, + out_fences, + parent->fence_context, + ++parent->seqno, + false); + if (!fence_array) + goto put_out_fences; + + out_fence = &fence_array->base; + out_fences = NULL; + } else { + out_fence = out_fences[0]; + } + sync_fence = sync_file_create(out_fence); if (sync_fence) { fd_install(out_fence_fd, sync_fence->file); @@ -3500,9 +3689,15 @@ end:; args->rsvd2 |= (u64)out_fence_fd << 32; out_fence_fd = -1; } + + /* + * The sync_file now owns out_fence, drop the creation + * reference. + */ dma_fence_put(out_fence); - } else if (out_fence) { - dma_fence_put(out_fence); + } else if (out_fences) { +put_out_fences: + put_out_fences(out_fences, num_batches); } args->flags &= ~__I915_EXEC_UNKNOWN_FLAGS; @@ -3513,9 +3708,15 @@ end:; if (out_fence_fd >= 0) put_unused_fd(out_fence_fd); err_out_fence: + kfree(out_fences); dma_fence_put(exec_fence); err_exec_fence: dma_fence_put(in_fence); +err_parent: + if (parent) + intel_context_put(parent); +err_context: + i915_gem_context_put(ctx); return err; } diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c index 35f5d28c7465..ab69ba9bb9aa 100644 --- a/drivers/gpu/drm/i915/gt/intel_context.c +++ b/drivers/gpu/drm/i915/gt/intel_context.c @@ -475,6 +475,9 @@ intel_context_init(struct intel_context *ce, struct intel_engine_cs *engine) ce->guc_id = GUC_INVALID_LRC_ID; INIT_LIST_HEAD(&ce->guc_id_link); + mutex_init(&ce->parallel_submit); + ce->fence_context = dma_fence_context_alloc(1); + i915_sw_fence_init(&ce->guc_blocked, sw_fence_dummy_notify); i915_sw_fence_commit(&ce->guc_blocked); @@ -497,6 +500,8 @@ void intel_context_fini(struct intel_context *ce) for_each_child_safe(ce, child, next) intel_context_put(child); + mutex_destroy(&ce->parallel_submit); + mutex_destroy(&ce->pin_mutex); i915_active_fini(&ce->active); } diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h index 4886e107c1f8..e5d4cc113358 100644 --- a/drivers/gpu/drm/i915/gt/intel_context_types.h +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h @@ -235,6 +235,15 @@ struct intel_context { /* Last request submitted on a parent */ struct i915_request *last_rq; + + /* Parallel submission mutex */ + struct mutex parallel_submit; + + /* Fence context for parallel submission */ + u64 fence_context; + + /* Seqno for parallel submission */ + u32 seqno; }; #endif /* __INTEL_CONTEXT_TYPES__ */ diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c index 5b9dce0f443b..e0d4d1162db5 100644 --- a/drivers/gpu/drm/i915/i915_vma.c +++ b/drivers/gpu/drm/i915/i915_vma.c @@ -1238,9 +1238,11 @@ int __i915_vma_move_to_active(struct i915_vma *vma, struct 
i915_request *rq) return i915_active_add_request(&vma->active, rq); } -int i915_vma_move_to_active(struct i915_vma *vma, - struct i915_request *rq, - unsigned int flags) +int _i915_vma_move_to_active(struct i915_vma *vma, + struct i915_request *rq, + unsigned int flags, + struct dma_fence *shared_fence, + struct dma_fence *excl_fence) { struct drm_i915_gem_object *obj = vma->obj; int err; @@ -1261,7 +1263,7 @@ int i915_vma_move_to_active(struct i915_vma *vma, intel_frontbuffer_put(front); } - dma_resv_add_excl_fence(vma->resv, &rq->fence); + dma_resv_add_excl_fence(vma->resv, excl_fence); obj->write_domain = I915_GEM_DOMAIN_RENDER; obj->read_domains = 0; } else { @@ -1271,7 +1273,8 @@ int i915_vma_move_to_active(struct i915_vma *vma, return err; } - dma_resv_add_shared_fence(vma->resv, &rq->fence); + if (shared_fence) + dma_resv_add_shared_fence(vma->resv, shared_fence); obj->write_domain = 0; } diff --git a/drivers/gpu/drm/i915/i915_vma.h b/drivers/gpu/drm/i915/i915_vma.h index eca452a9851f..297587c9f303 100644 --- a/drivers/gpu/drm/i915/i915_vma.h +++ b/drivers/gpu/drm/i915/i915_vma.h @@ -57,9 +57,19 @@ static inline bool i915_vma_is_active(const struct i915_vma *vma) int __must_check __i915_vma_move_to_active(struct i915_vma *vma, struct i915_request *rq); -int __must_check i915_vma_move_to_active(struct i915_vma *vma, - struct i915_request *rq, - unsigned int flags); + +int __must_check _i915_vma_move_to_active(struct i915_vma *vma, + struct i915_request *rq, + unsigned int flags, + struct dma_fence *shared_fence, + struct dma_fence *excl_fence); +static inline int __must_check +i915_vma_move_to_active(struct i915_vma *vma, + struct i915_request *rq, + unsigned int flags) +{ + return _i915_vma_move_to_active(vma, rq, flags, &rq->fence, &rq->fence); +} #define __i915_vma_flags(v) ((unsigned long *)&(v)->flags.counter) From patchwork Tue Jul 20 20:58:00 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389383 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4836EC636CA for ; Tue, 20 Jul 2021 20:41:41 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 19DF660FEA for ; Tue, 20 Jul 2021 20:41:41 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 19DF660FEA Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id A369C6E5BB; Tue, 20 Jul 2021 20:40:56 +0000 (UTC) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by gabe.freedesktop.org (Postfix) with ESMTPS id BB4816E536; Tue, 20 Jul 2021 20:40:18 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="209421892" X-IronPort-AV: 
E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="209421892" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:18 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906100" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 40/42] drm/i915: Eliminate unnecessary VMA calls for multi-BB submission Date: Tue, 20 Jul 2021 13:58:00 -0700 Message-Id: <20210720205802.39610-41-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Certain VMA functions in the execbuf IOCTL only need to be called on first or last BB of a multi-BB submission. eb_relocate() on the first and eb_release_vmas() on the last. Doing so will save CPU / GPU cycles. Signed-off-by: Matthew Brost --- .../gpu/drm/i915/gem/i915_gem_execbuffer.c | 123 +++++++++++------- .../i915/gem/selftests/i915_gem_execbuffer.c | 14 +- 2 files changed, 81 insertions(+), 56 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index 1e3a8d9b965c..df2a95e80442 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -270,7 +270,7 @@ struct i915_execbuffer { /** list of vma that have execobj.relocation_count */ struct list_head relocs; - struct i915_gem_ww_ctx ww; + struct i915_gem_ww_ctx *ww; /** * Track the most recently used object for relocations, as we @@ -448,7 +448,7 @@ eb_pin_vma(struct i915_execbuffer *eb, pin_flags |= PIN_GLOBAL; /* Attempt to reuse the current location if available */ - err = i915_vma_pin_ww(vma, &eb->ww, 0, 0, pin_flags); + err = i915_vma_pin_ww(vma, eb->ww, 0, 0, pin_flags); if (err == -EDEADLK) return err; @@ -457,11 +457,11 @@ eb_pin_vma(struct i915_execbuffer *eb, return err; /* Failing that pick any _free_ space if suitable */ - err = i915_vma_pin_ww(vma, &eb->ww, - entry->pad_to_size, - entry->alignment, - eb_pin_flags(entry, ev->flags) | - PIN_USER | PIN_NOEVICT); + err = i915_vma_pin_ww(vma, eb->ww, + entry->pad_to_size, + entry->alignment, + eb_pin_flags(entry, ev->flags) | + PIN_USER | PIN_NOEVICT); if (unlikely(err)) return err; } @@ -643,7 +643,7 @@ static int eb_reserve_vma(struct i915_execbuffer *eb, return err; } - err = i915_vma_pin_ww(vma, &eb->ww, + err = i915_vma_pin_ww(vma, eb->ww, entry->pad_to_size, entry->alignment, eb_pin_flags(entry, ev->flags) | pin_flags); if (err) @@ -940,7 +940,7 @@ static int eb_lock_vmas(struct i915_execbuffer *eb) struct eb_vma *ev = &eb->vma[i]; struct i915_vma *vma = ev->vma; - err = i915_gem_object_lock(vma->obj, &eb->ww); + err = i915_gem_object_lock(vma->obj, eb->ww); if (err) return err; } @@ -1020,12 +1020,13 @@ eb_get_vma(const struct i915_execbuffer *eb, unsigned long handle) } } -static void eb_release_vmas(struct i915_execbuffer *eb, bool final) +static void eb_release_vmas(struct i915_execbuffer *eb, bool final, + bool unreserve) { const unsigned 
int count = eb->buffer_count; unsigned int i; - for (i = 0; i < count; i++) { + for (i = 0; unreserve && i < count; i++) { struct eb_vma *ev = &eb->vma[i]; struct i915_vma *vma = ev->vma; @@ -1237,7 +1238,7 @@ static void *reloc_iomap(struct drm_i915_gem_object *obj, if (err) return ERR_PTR(err); - vma = i915_gem_object_ggtt_pin_ww(obj, &eb->ww, NULL, 0, 0, + vma = i915_gem_object_ggtt_pin_ww(obj, eb->ww, NULL, 0, 0, PIN_MAPPABLE | PIN_NONBLOCK /* NOWARN */ | PIN_NOEVICT); @@ -1361,7 +1362,7 @@ static int __reloc_gpu_alloc(struct i915_execbuffer *eb, } eb->reloc_pool = NULL; - err = i915_gem_object_lock(pool->obj, &eb->ww); + err = i915_gem_object_lock(pool->obj, eb->ww); if (err) goto err_pool; @@ -1380,7 +1381,7 @@ static int __reloc_gpu_alloc(struct i915_execbuffer *eb, goto err_unmap; } - err = i915_vma_pin_ww(batch, &eb->ww, 0, 0, PIN_USER | PIN_NONBLOCK); + err = i915_vma_pin_ww(batch, eb->ww, 0, 0, PIN_USER | PIN_NONBLOCK); if (err) goto err_unmap; @@ -1402,7 +1403,7 @@ static int __reloc_gpu_alloc(struct i915_execbuffer *eb, eb->reloc_context = ce; } - err = intel_context_pin_ww(ce, &eb->ww); + err = intel_context_pin_ww(ce, eb->ww); if (err) goto err_unpin; @@ -2017,8 +2018,8 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb, } /* We may process another execbuffer during the unlock... */ - eb_release_vmas(eb, false); - i915_gem_ww_ctx_fini(&eb->ww); + eb_release_vmas(eb, false, true); + i915_gem_ww_ctx_fini(eb->ww); if (rq) { /* nonblocking is always false */ @@ -2062,7 +2063,7 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb, err = eb_reinit_userptr(eb); err_relock: - i915_gem_ww_ctx_init(&eb->ww, true); + i915_gem_ww_ctx_init(eb->ww, true); if (err) goto out; @@ -2119,8 +2120,8 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb, err: if (err == -EDEADLK) { - eb_release_vmas(eb, false); - err = i915_gem_ww_ctx_backoff(&eb->ww); + eb_release_vmas(eb, false, true); + err = i915_gem_ww_ctx_backoff(eb->ww); if (!err) goto repeat_validate; } @@ -2152,7 +2153,7 @@ static noinline int eb_relocate_parse_slow(struct i915_execbuffer *eb, return err; } -static int eb_relocate_parse(struct i915_execbuffer *eb) +static int eb_relocate_parse(struct i915_execbuffer *eb, bool first) { int err; struct i915_request *rq = NULL; @@ -2189,14 +2190,16 @@ static int eb_relocate_parse(struct i915_execbuffer *eb) /* only throttle once, even if we didn't need to throttle */ throttle = false; - err = eb_validate_vmas(eb); - if (err == -EAGAIN) - goto slow; - else if (err) - goto err; + if (first) { + err = eb_validate_vmas(eb); + if (err == -EAGAIN) + goto slow; + else if (err) + goto err; + } /* The objects are in their final locations, apply the relocations. 
*/ - if (eb->args->flags & __EXEC_HAS_RELOC) { + if (eb->args->flags & __EXEC_HAS_RELOC && first) { struct eb_vma *ev; list_for_each_entry(ev, &eb->relocs, reloc_link) { @@ -2211,13 +2214,13 @@ static int eb_relocate_parse(struct i915_execbuffer *eb) goto slow; } - if (!err) + if (!err && first) err = eb_parse(eb); err: if (err == -EDEADLK) { - eb_release_vmas(eb, false); - err = i915_gem_ww_ctx_backoff(&eb->ww); + eb_release_vmas(eb, false, true); + err = i915_gem_ww_ctx_backoff(eb->ww); if (!err) goto retry; } @@ -2398,7 +2401,7 @@ shadow_batch_pin(struct i915_execbuffer *eb, if (IS_ERR(vma)) return vma; - err = i915_vma_pin_ww(vma, &eb->ww, 0, 0, flags); + err = i915_vma_pin_ww(vma, eb->ww, 0, 0, flags); if (err) return ERR_PTR(err); @@ -2412,7 +2415,7 @@ static struct i915_vma *eb_dispatch_secure(struct i915_execbuffer *eb, struct i9 * batch" bit. Hence we need to pin secure batches into the global gtt. * hsw should have this fixed, but bdw mucks it up again. */ if (eb->batch_flags & I915_DISPATCH_SECURE) - return i915_gem_object_ggtt_pin_ww(vma->obj, &eb->ww, NULL, 0, 0, 0); + return i915_gem_object_ggtt_pin_ww(vma->obj, eb->ww, NULL, 0, 0, 0); return NULL; } @@ -2458,7 +2461,7 @@ static int eb_parse(struct i915_execbuffer *eb) eb->batch_pool = pool; } - err = i915_gem_object_lock(pool->obj, &eb->ww); + err = i915_gem_object_lock(pool->obj, eb->ww); if (err) goto err; @@ -2666,7 +2669,7 @@ static struct i915_request *eb_pin_engine(struct i915_execbuffer *eb, bool throt * GGTT space, so do this first before we reserve a seqno for * ourselves. */ - err = intel_context_pin_ww(ce, &eb->ww); + err = intel_context_pin_ww(ce, eb->ww); if (err) return ERR_PTR(err); @@ -3218,7 +3221,8 @@ i915_gem_do_execbuffer(struct drm_device *dev, unsigned int batch_number, struct dma_fence *in_fence, struct dma_fence *exec_fence, - struct dma_fence **out_fence) + struct dma_fence **out_fence, + struct i915_gem_ww_ctx *ww) { struct drm_i915_private *i915 = to_i915(dev); struct i915_execbuffer eb; @@ -3239,7 +3243,8 @@ i915_gem_do_execbuffer(struct drm_device *dev, eb.exec = exec; eb.vma = (struct eb_vma *)(exec + args->buffer_count + 1); - eb.vma[0].vma = NULL; + if (first) + eb.vma[0].vma = NULL; eb.reloc_pool = eb.batch_pool = NULL; eb.reloc_context = NULL; @@ -3251,6 +3256,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, eb.batch_len = args->batch_len; eb.trampoline = NULL; eb.composite_fence = NULL; + eb.ww = ww; eb.fences = NULL; eb.num_fences = 0; @@ -3269,9 +3275,14 @@ i915_gem_do_execbuffer(struct drm_device *dev, if (err) goto err_ext; - err = eb_create(&eb); - if (err) - goto err_ext; + if (first) { + err = eb_create(&eb); + if (err) + goto err_ext; + } else { + eb.lut_size = -eb.buffer_count; + } + GEM_BUG_ON(!eb.lut_size); @@ -3286,15 +3297,22 @@ i915_gem_do_execbuffer(struct drm_device *dev, if (unlikely(err)) goto err_context; - err = eb_lookup_vmas(&eb); - if (err) { - eb_release_vmas(&eb, true); - goto err_engine; + if (first) { + err = eb_lookup_vmas(&eb); + if (err) { + eb_release_vmas(&eb, true, true); + goto err_engine; + } + + } else { + eb.batch = &eb.vma[eb.batch_index]; } - i915_gem_ww_ctx_init(&eb.ww, true); - err = eb_relocate_parse(&eb); + if (first) + i915_gem_ww_ctx_init(eb.ww, true); + + err = eb_relocate_parse(&eb, first); if (err) { /* * If the user expects the execobject.offset and @@ -3307,7 +3325,8 @@ i915_gem_do_execbuffer(struct drm_device *dev, goto err_vma; } - ww_acquire_done(&eb.ww.ctx); + if (first) + ww_acquire_done(&eb.ww->ctx); batch = eb.batch->vma; @@ 
-3410,11 +3429,12 @@ i915_gem_do_execbuffer(struct drm_device *dev, i915_request_put(eb.request); err_vma: - eb_release_vmas(&eb, true); + eb_release_vmas(&eb, true, err || last); if (eb.trampoline) i915_vma_unpin(eb.trampoline); WARN_ON(err == -EDEADLK); - i915_gem_ww_ctx_fini(&eb.ww); + if (err || last) + i915_gem_ww_ctx_fini(eb.ww); if (eb.batch_pool) intel_gt_buffer_pool_put(eb.batch_pool); @@ -3476,6 +3496,7 @@ i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data, const size_t count = args->buffer_count; int err; struct i915_gem_context *ctx; + struct i915_gem_ww_ctx ww; struct intel_context *parent = NULL; unsigned int num_batches = 1, i; bool is_parallel = false; @@ -3603,7 +3624,8 @@ i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data, 0, in_fence, exec_fence, - out_fences); + out_fences, + &ww); for (i = 1; err == 0 && i < num_batches; i++) err = i915_gem_do_execbuffer(dev, file, args, exec2_list, @@ -3613,7 +3635,8 @@ i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data, i, NULL, NULL, - out_fences); + out_fences, + &ww); if (is_parallel) mutex_unlock(&parent->parallel_submit); diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_execbuffer.c index 16162fc2782d..710d2700e5b4 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_execbuffer.c @@ -32,11 +32,11 @@ static int __igt_gpu_reloc(struct i915_execbuffer *eb, if (IS_ERR(vma)) return PTR_ERR(vma); - err = i915_gem_object_lock(obj, &eb->ww); + err = i915_gem_object_lock(obj, eb->ww); if (err) return err; - err = i915_vma_pin_ww(vma, &eb->ww, 0, 0, PIN_USER | PIN_HIGH); + err = i915_vma_pin_ww(vma, eb->ww, 0, 0, PIN_USER | PIN_HIGH); if (err) return err; @@ -106,10 +106,12 @@ static int __igt_gpu_reloc(struct i915_execbuffer *eb, static int igt_gpu_reloc(void *arg) { struct i915_execbuffer eb; + struct i915_gem_ww_ctx ww; struct drm_i915_gem_object *scratch; int err = 0; u32 *map; + eb.ww = &ww; eb.i915 = arg; scratch = i915_gem_object_create_internal(eb.i915, 4096); @@ -141,20 +143,20 @@ static int igt_gpu_reloc(void *arg) eb.reloc_pool = NULL; eb.reloc_context = NULL; - i915_gem_ww_ctx_init(&eb.ww, false); + i915_gem_ww_ctx_init(eb.ww, false); retry: - err = intel_context_pin_ww(eb.context, &eb.ww); + err = intel_context_pin_ww(eb.context, eb.ww); if (!err) { err = __igt_gpu_reloc(&eb, scratch); intel_context_unpin(eb.context); } if (err == -EDEADLK) { - err = i915_gem_ww_ctx_backoff(&eb.ww); + err = i915_gem_ww_ctx_backoff(eb.ww); if (!err) goto retry; } - i915_gem_ww_ctx_fini(&eb.ww); + i915_gem_ww_ctx_fini(eb.ww); if (eb.reloc_pool) intel_gt_buffer_pool_put(eb.reloc_pool); From patchwork Tue Jul 20 20:58:01 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389345 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7D664C6377E for ; Tue, 20 Jul 2021 20:41:02 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org 
[131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 46B2361004 for ; Tue, 20 Jul 2021 20:41:02 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 46B2361004 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id C48386E546; Tue, 20 Jul 2021 20:40:28 +0000 (UTC) Received: from mga12.intel.com (mga12.intel.com [192.55.52.136]) by gabe.freedesktop.org (Postfix) with ESMTPS id 3FC1F6E53E; Tue, 20 Jul 2021 20:40:19 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="190909018" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="190909018" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:18 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906101" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 41/42] drm/i915: Enable multi-bb execbuf Date: Tue, 20 Jul 2021 13:58:01 -0700 Message-Id: <20210720205802.39610-42-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Enable multi-bb execbuf by enabling the set_parallel extension. 
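
As an illustration only (not part of this patch), the sketch below shows roughly how userspace opts in once the extension is enabled: a context is created whose engine-map slot 0 is configured as a 2-wide parallel engine via I915_CONTEXT_ENGINES_EXT_PARALLEL_SUBMIT. The helper name is made up, the struct/field names and the I915_DEFINE_* macros follow the uAPI as it was eventually merged upstream (they may differ slightly from this RFC), and the use of two video engines assumes a platform exposing VCS0 and VCS1.

#include <errno.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

/*
 * Hypothetical userspace helper: create a GEM context whose engine-map
 * slot 0 is a 2-wide parallel engine backed by two video engines.
 * Struct and field names follow the upstream uAPI; this is a sketch,
 * not part of the patch.
 */
static int create_parallel_ctx(int fd, uint32_t *ctx_id)
{
	I915_DEFINE_CONTEXT_ENGINES_PARALLEL_SUBMIT(parallel, 2);
	I915_DEFINE_CONTEXT_PARAM_ENGINES(engine_map, 1);
	struct drm_i915_gem_context_create_ext_setparam setparam;
	struct drm_i915_gem_context_create_ext create;

	memset(&parallel, 0, sizeof(parallel));
	parallel.base.name = I915_CONTEXT_ENGINES_EXT_PARALLEL_SUBMIT;
	parallel.engine_index = 0;	/* engine-map slot being configured */
	parallel.width = 2;		/* two batch buffers per execbuf */
	parallel.num_siblings = 1;	/* execlists allows exactly one sibling */
	parallel.engines[0].engine_class = I915_ENGINE_CLASS_VIDEO;
	parallel.engines[0].engine_instance = 0;	/* assumes VCS0 exists */
	parallel.engines[1].engine_class = I915_ENGINE_CLASS_VIDEO;
	parallel.engines[1].engine_instance = 1;	/* assumes VCS1 exists */

	memset(&engine_map, 0, sizeof(engine_map));
	engine_map.extensions = (uintptr_t)&parallel;
	/* Slot 0 is left invalid in the base map; the extension fills it in. */
	engine_map.engines[0].engine_class = I915_ENGINE_CLASS_INVALID;
	engine_map.engines[0].engine_instance = I915_ENGINE_CLASS_INVALID_NONE;

	memset(&setparam, 0, sizeof(setparam));
	setparam.base.name = I915_CONTEXT_CREATE_EXT_SETPARAM;
	setparam.param.param = I915_CONTEXT_PARAM_ENGINES;
	setparam.param.size = sizeof(engine_map);
	setparam.param.value = (uintptr_t)&engine_map;

	memset(&create, 0, sizeof(create));
	create.flags = I915_CONTEXT_CREATE_FLAGS_USE_EXTENSIONS;
	create.extensions = (uintptr_t)&setparam;

	if (ioctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_CREATE_EXT, &create))
		return -errno;

	*ctx_id = create.ctx_id;
	return 0;
}

After this, every execbuf directed at slot 0 of the context must carry two batch buffers, one per child engine.
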
Signed-off-by: Matthew Brost --- drivers/gpu/drm/i915/gem/i915_gem_context.c | 3 --- 1 file changed, 3 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c index 805fbcde03ea..23befe264f35 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c @@ -533,9 +533,6 @@ set_proto_ctx_engines_parallel_submit(struct i915_user_extension __user *base, struct intel_engine_cs **siblings = NULL; intel_engine_mask_t prev_mask; - /* Disabling for now */ - return -ENODEV; - if (!(intel_uc_uses_guc_submission(&i915->gt.uc))) return -ENODEV; From patchwork Tue Jul 20 20:58:02 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Brost X-Patchwork-Id: 12389381 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 90C1BC636C8 for ; Tue, 20 Jul 2021 20:41:38 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 60EF260FEA for ; Tue, 20 Jul 2021 20:41:38 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 60EF260FEA Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id CD2D96E5A3; Tue, 20 Jul 2021 20:40:49 +0000 (UTC) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by gabe.freedesktop.org (Postfix) with ESMTPS id B25C46E532; Tue, 20 Jul 2021 20:40:18 +0000 (UTC) X-IronPort-AV: E=McAfee;i="6200,9189,10051"; a="296885381" X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="296885381" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:18 -0700 X-IronPort-AV: E=Sophos;i="5.84,256,1620716400"; d="scan'208";a="414906102" Received: from dhiatt-server.jf.intel.com ([10.54.81.3]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 Jul 2021 13:40:16 -0700 From: Matthew Brost To: , Subject: [RFC PATCH 42/42] drm/i915/execlists: Parallel submission support for execlists Date: Tue, 20 Jul 2021 13:58:02 -0700 Message-Id: <20210720205802.39610-43-matthew.brost@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210720205802.39610-1-matthew.brost@intel.com> References: <20210720205802.39610-1-matthew.brost@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" A weak implmentation of parallel submission (multi-bb execbuf IOCTL) for execlists. 
It does as little as possible to support this interface: submit fences are simply passed between each request generated, and virtual engines are not allowed. This is on par with the existing (hopefully soon deprecated) bonding interface. As with the GuC implementation, the pinning interface layering is broken. This will get cleaned up once the pre_pin / post_unpin layering violations are fixed; rather than try to fix it here, just do what the GuC is doing in the meantime. Signed-off-by: Matthew Brost --- drivers/gpu/drm/i915/gem/i915_gem_context.c | 9 +- drivers/gpu/drm/i915/gt/intel_context.c | 1 - drivers/gpu/drm/i915/gt/intel_context_types.h | 2 + .../drm/i915/gt/intel_execlists_submission.c | 196 ++++++++++++++++++ 4 files changed, 204 insertions(+), 4 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c index 23befe264f35..8b595ec006a7 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c @@ -533,9 +533,6 @@ set_proto_ctx_engines_parallel_submit(struct i915_user_extension __user *base, struct intel_engine_cs **siblings = NULL; intel_engine_mask_t prev_mask; - if (!(intel_uc_uses_guc_submission(&i915->gt.uc))) return -ENODEV; - if (get_user(slot, &ext->engine_index)) return -EFAULT; @@ -545,6 +542,12 @@ set_proto_ctx_engines_parallel_submit(struct i915_user_extension __user *base, if (get_user(num_siblings, &ext->num_siblings)) return -EFAULT; + if (!intel_uc_uses_guc_submission(&i915->gt.uc) && num_siblings != 1) { + drm_dbg(&i915->drm, "Only 1 sibling (%d) supported in non-GuC mode\n", + num_siblings); + return -EINVAL; + } + if (slot >= set->num_engines) { drm_dbg(&i915->drm, "Invalid placement value, %d >= %d\n", slot, set->num_engines); diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c index ab69ba9bb9aa..5187a75d4e04 100644 --- a/drivers/gpu/drm/i915/gt/intel_context.c +++ b/drivers/gpu/drm/i915/gt/intel_context.c @@ -637,7 +637,6 @@ void intel_context_bind_parent_child(struct intel_context *parent, * Callers responsibility to validate that this function is used * correctly but we use GEM_BUG_ON here ensure that they do.
*/ - GEM_BUG_ON(!intel_engine_uses_guc(parent->engine)); GEM_BUG_ON(intel_context_is_pinned(parent)); GEM_BUG_ON(intel_context_is_child(parent)); GEM_BUG_ON(intel_context_is_pinned(child)); diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h index e5d4cc113358..f66f54820c93 100644 --- a/drivers/gpu/drm/i915/gt/intel_context_types.h +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h @@ -116,6 +116,8 @@ struct intel_context { #define CONTEXT_FORCE_SINGLE_SUBMISSION 7 #define CONTEXT_NOPREEMPT 8 #define CONTEXT_LRCA_DIRTY 9 +#define CONTEXT_PIN_TO_PARENT 10 +#define CONTEXT_PIN_CHILDREN 11 struct { u64 timeout_us; diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c index 87ce28320098..f25e0af8b184 100644 --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c @@ -2551,6 +2551,201 @@ static void execlists_context_cancel_request(struct intel_context *ce, current->comm); } +static int execlists_parent_context_pre_pin(struct intel_context *ce, + struct i915_gem_ww_ctx *ww) +{ + struct intel_context *child; + int err, i = 0, j = 0; + + GEM_BUG_ON(!intel_context_is_parent(ce)); + + for_each_child(ce, child) { + err = i915_active_acquire(&child->active); + if (unlikely(err)) + goto unwind_active; + ++i; + } + + for_each_child(ce, child) { + err = __execlists_context_pre_pin(child, child->engine, ww); + if (unlikely(err)) + goto unwind_pre_pin; + ++j; + } + + err = __execlists_context_pre_pin(ce, ce->engine, ww); + if (unlikely(err)) + goto unwind_pre_pin; + + return 0; + +unwind_pre_pin: + for_each_child(ce, child) { + if (!j--) + break; + lrc_post_unpin(child); + } + +unwind_active: + for_each_child(ce, child) { + if (!i--) + break; + i915_active_release(&child->active); + } + + return err; +} + +static void execlists_parent_context_post_unpin(struct intel_context *ce) +{ + struct intel_context *child; + + GEM_BUG_ON(!intel_context_is_parent(ce)); + + for_each_child(ce, child) + lrc_post_unpin(child); + lrc_post_unpin(ce); + + for_each_child(ce, child) { + intel_context_get(child); + i915_active_release(&child->active); + intel_context_put(child); + } +} + +static int execlists_parent_context_pin(struct intel_context *ce) +{ + int ret, i = 0, j = 0; + struct intel_context *child; + + GEM_BUG_ON(!intel_context_is_parent(ce)); + + for_each_child(ce, child) { + ret = lrc_pin(child, child->engine); + if (unlikely(ret)) + goto unwind_pin; + ++i; + } + ret = lrc_pin(ce, ce->engine); + if (unlikely(ret)) + goto unwind_pin; + + return 0; + +unwind_pin: + for_each_child(ce, child) { + if (++j > i) + break; + lrc_unpin(child); + } + + return ret; +} + +static void execlists_parent_context_unpin(struct intel_context *ce) +{ + struct intel_context *child; + + GEM_BUG_ON(!intel_context_is_parent(ce)); + + for_each_child(ce, child) + lrc_unpin(child); + lrc_unpin(ce); +} + +static const struct intel_context_ops parent_context_ops = { + .flags = COPS_HAS_INFLIGHT, + + .alloc = execlists_context_alloc, + + .cancel_request = execlists_context_cancel_request, + + .pre_pin = execlists_parent_context_pre_pin, + .pin = execlists_parent_context_pin, + .unpin = execlists_parent_context_unpin, + .post_unpin = execlists_parent_context_post_unpin, + + .enter = intel_context_enter_engine, + .exit = intel_context_exit_engine, + + .destroy = lrc_destroy, +}; + +static const struct intel_context_ops child_context_ops = { + .flags = 
COPS_HAS_INFLIGHT, + + .alloc = execlists_context_alloc, + + .cancel_request = execlists_context_cancel_request, + + .enter = intel_context_enter_engine, + .exit = intel_context_exit_engine, + + .destroy = lrc_destroy, +}; + +static struct intel_context * +execlists_create_parallel(struct intel_engine_cs **engines, + unsigned int num_siblings, + unsigned int width) +{ + struct intel_engine_cs **siblings = NULL; + struct intel_context *parent = NULL, *ce, *err; + int i, j; + int ret; + + GEM_BUG_ON(num_siblings != 1); + + siblings = kmalloc_array(num_siblings, + sizeof(*siblings), + GFP_KERNEL); + if (!siblings) + return ERR_PTR(-ENOMEM); + + for (i = 0; i < width; ++i) { + for (j = 0; j < num_siblings; ++j) + siblings[j] = engines[i * num_siblings + j]; + + ce = intel_context_create(siblings[0]); + if (!ce) { + err = ERR_PTR(-ENOMEM); + goto unwind; + } + + if (i == 0) { + parent = ce; + } else { + intel_context_bind_parent_child(parent, ce); + ret = intel_context_alloc_state(ce); + if (ret) { + err = ERR_PTR(ret); + goto unwind; + } + } + } + + __set_bit(CONTEXT_NOPREEMPT, &parent->flags); + for_each_child(parent, ce) + __set_bit(CONTEXT_NOPREEMPT, &ce->flags); + + parent->ops = &parent_context_ops; + for_each_child(parent, ce) + ce->ops = &child_context_ops; + + + kfree(siblings); + return parent; + +unwind: + if (parent) { + for_each_child(parent, ce) + intel_context_put(ce); + intel_context_put(parent); + } + kfree(siblings); + return err; +} + static const struct intel_context_ops execlists_context_ops = { .flags = COPS_HAS_INFLIGHT, @@ -2569,6 +2764,7 @@ static const struct intel_context_ops execlists_context_ops = { .reset = lrc_reset, .destroy = lrc_destroy, + .create_parallel = execlists_create_parallel, .create_virtual = execlists_create_virtual, };
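
To close out the series, here is a matching hypothetical userspace sketch of a 2-wide submission against such a parallel context (created as in the earlier sketch). The helper and handle names are illustrative only. Per the multi-BB execbuf rules introduced earlier in the series, the two batches are the first two objects in the list when I915_EXEC_BATCH_FIRST is set (the last two otherwise), batch_len must be zero, and an out fence requested with I915_EXEC_FENCE_OUT signals only once both batches have completed.

#include <errno.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

/*
 * Hypothetical userspace helper: submit two batch buffers in a single
 * execbuf to the 2-wide parallel engine at engine-map slot 0.  "ctx_id"
 * comes from a context created with the parallel extension; batch0 and
 * batch1 are GEM handles assumed to already contain valid commands.
 */
static int submit_parallel_pair(int fd, uint32_t ctx_id,
				uint32_t batch0, uint32_t batch1)
{
	struct drm_i915_gem_exec_object2 obj[2];
	struct drm_i915_gem_execbuffer2 execbuf;

	memset(obj, 0, sizeof(obj));
	obj[0].handle = batch0;	/* runs on the first engine of the parallel set */
	obj[1].handle = batch1;	/* runs on the second engine */

	memset(&execbuf, 0, sizeof(execbuf));
	execbuf.buffers_ptr = (uintptr_t)obj;
	execbuf.buffer_count = 2;	/* exactly "width" batches in this list */
	execbuf.batch_len = 0;		/* must be zero for multi-BB submission */
	execbuf.rsvd1 = ctx_id;
	/*
	 * Ring index 0 selects engine-map slot 0 (the parallel engine).
	 * With I915_EXEC_BATCH_FIRST the batches are the first "width"
	 * objects in the list; without it they would be the last "width".
	 */
	execbuf.flags = 0 | I915_EXEC_BATCH_FIRST;

	if (ioctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER2, &execbuf))
		return -errno;

	return 0;
}

This mirrors the kernel-side behaviour above: the per-BB request fences are rolled into a single dma_fence_array, which is what any exported out fence and the dma_resv exclusive slot of written objects observe.
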