From patchwork Wed Feb 3 16:52:56 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Chris Wilson <chris@chris-wilson.co.uk>
X-Patchwork-Id: 12064833
From: Chris Wilson <chris@chris-wilson.co.uk>
To: intel-gfx@lists.freedesktop.org
Date: Wed, 3 Feb 2021 16:52:56 +0000
Message-Id: <20210203165259.13087-6-chris@chris-wilson.co.uk>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210203165259.13087-1-chris@chris-wilson.co.uk>
References: <20210203165259.13087-1-chris@chris-wilson.co.uk>
Subject: [Intel-gfx] [CI 6/9] drm/i915: Extract request submission from
 execlists
List-Id: Intel graphics driver community testing & development

In the process of preparing to reuse the request submission logic for
other backends, lift it out of the execlists backend. It already
operates on the common structs, so it is just a matter of moving and
renaming.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin
---
 .../drm/i915/gt/intel_execlists_submission.c  | 55 +------------
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 30 +------
 drivers/gpu/drm/i915/i915_scheduler.c         | 82 +++++++++++++++++++
 drivers/gpu/drm/i915/i915_scheduler.h         |  2 +
 4 files changed, 86 insertions(+), 83 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index f95a35383f48..7b426e461ce3 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -2453,59 +2453,6 @@ static void execlists_preempt(struct timer_list *timer)
 	execlists_kick(timer, preempt);
 }
 
-static void queue_request(struct intel_engine_cs *engine,
-			  struct i915_request *rq)
-{
-	GEM_BUG_ON(!list_empty(&rq->sched.link));
-	list_add_tail(&rq->sched.link,
-		      i915_sched_lookup_priolist(engine, rq_prio(rq)));
-	set_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
-}
-
-static bool submit_queue(struct intel_engine_cs *engine,
-			 const struct i915_request *rq)
-{
-	struct intel_engine_execlists *execlists = &engine->execlists;
-
-	if (rq_prio(rq) <= execlists->queue_priority_hint)
-		return false;
-
-	execlists->queue_priority_hint = rq_prio(rq);
-	return true;
-}
-
-static bool ancestor_on_hold(const struct intel_engine_cs *engine,
-			     const struct i915_request *rq)
-{
-	GEM_BUG_ON(i915_request_on_hold(rq));
-	return !list_empty(&engine->active.hold) && hold_request(rq);
-}
-
-static void execlists_submit_request(struct i915_request *request)
-{
-	struct intel_engine_cs *engine = request->engine;
-	unsigned long flags;
-
-	/* Will be called from irq-context when using foreign fences. */
-	spin_lock_irqsave(&engine->active.lock, flags);
-
-	if (unlikely(ancestor_on_hold(engine, request))) {
-		RQ_TRACE(request, "ancestor on hold\n");
-		list_add_tail(&request->sched.link, &engine->active.hold);
-		i915_request_set_hold(request);
-	} else {
-		queue_request(engine, request);
-
-		GEM_BUG_ON(RB_EMPTY_ROOT(&engine->execlists.queue.rb_root));
-		GEM_BUG_ON(list_empty(&request->sched.link));
-
-		if (submit_queue(engine, request))
-			__execlists_kick(&engine->execlists);
-	}
-
-	spin_unlock_irqrestore(&engine->active.lock, flags);
-}
-
 static int execlists_context_pre_pin(struct intel_context *ce,
				      struct i915_gem_ww_ctx *ww,
				      void **vaddr)
@@ -3105,7 +3052,7 @@ static bool can_preempt(struct intel_engine_cs *engine)
 
 static void execlists_set_default_submission(struct intel_engine_cs *engine)
 {
-	engine->submit_request = execlists_submit_request;
+	engine->submit_request = i915_request_enqueue;
 
 	engine->execlists.tasklet.callback = execlists_submission_tasklet;
 }
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 2f5c39091d49..aa34a2f3d6d5 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -520,34 +520,6 @@ static int guc_request_alloc(struct i915_request *request)
 	return 0;
 }
 
-static inline void queue_request(struct intel_engine_cs *engine,
-				 struct i915_request *rq,
-				 int prio)
-{
-	GEM_BUG_ON(!list_empty(&rq->sched.link));
-	list_add_tail(&rq->sched.link,
-		      i915_sched_lookup_priolist(engine, prio));
-	set_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
-}
-
-static void guc_submit_request(struct i915_request *rq)
-{
-	struct intel_engine_cs *engine = rq->engine;
-	unsigned long flags;
-
-	/* Will be called from irq-context when using foreign fences. */
-	spin_lock_irqsave(&engine->active.lock, flags);
-
-	queue_request(engine, rq, rq_prio(rq));
-
-	GEM_BUG_ON(RB_EMPTY_ROOT(&engine->execlists.queue.rb_root));
-	GEM_BUG_ON(list_empty(&rq->sched.link));
-
-	tasklet_hi_schedule(&engine->execlists.tasklet);
-
-	spin_unlock_irqrestore(&engine->active.lock, flags);
-}
-
 static void sanitize_hwsp(struct intel_engine_cs *engine)
 {
 	struct intel_timeline *tl;
@@ -616,7 +588,7 @@ static int guc_resume(struct intel_engine_cs *engine)
 
 static void guc_set_default_submission(struct intel_engine_cs *engine)
 {
-	engine->submit_request = guc_submit_request;
+	engine->submit_request = i915_request_enqueue;
 }
 
 static void guc_release(struct intel_engine_cs *engine)
diff --git a/drivers/gpu/drm/i915/i915_scheduler.c b/drivers/gpu/drm/i915/i915_scheduler.c
index 9e88417bf451..a56252d3546a 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.c
+++ b/drivers/gpu/drm/i915/i915_scheduler.c
@@ -460,6 +460,88 @@ void i915_request_set_priority(struct i915_request *rq, int prio)
 	spin_unlock_irqrestore(&engine->active.lock, flags);
 }
 
+static void queue_request(struct intel_engine_cs *engine,
+			  struct i915_request *rq)
+{
+	GEM_BUG_ON(!list_empty(&rq->sched.link));
+	list_add_tail(&rq->sched.link,
+		      i915_sched_lookup_priolist(engine, rq_prio(rq)));
+	set_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
+}
+
+static bool submit_queue(struct intel_engine_cs *engine,
+			 const struct i915_request *rq)
+{
+	struct intel_engine_execlists *execlists = &engine->execlists;
+
+	if (rq_prio(rq) <= execlists->queue_priority_hint)
+		return false;
+
+	execlists->queue_priority_hint = rq_prio(rq);
+	return true;
+}
+
+static bool hold_request(const struct i915_request *rq)
+{
+	struct i915_dependency *p;
+	bool result = false;
+
+	/*
+	 * If one of our ancestors is on hold, we must also be put on hold,
+	 * otherwise we will bypass it and execute before it.
+	 */
+	rcu_read_lock();
+	for_each_signaler(p, rq) {
+		const struct i915_request *s =
+			container_of(p->signaler, typeof(*s), sched);
+
+		if (s->engine != rq->engine)
+			continue;
+
+		result = i915_request_on_hold(s);
+		if (result)
+			break;
+	}
+	rcu_read_unlock();
+
+	return result;
+}
+
+static bool ancestor_on_hold(const struct intel_engine_cs *engine,
+			     const struct i915_request *rq)
+{
+	GEM_BUG_ON(i915_request_on_hold(rq));
+	return unlikely(!list_empty(&engine->active.hold)) && hold_request(rq);
+}
+
+void i915_request_enqueue(struct i915_request *rq)
+{
+	struct intel_engine_cs *engine = rq->engine;
+	unsigned long flags;
+	bool kick = false;
+
+	/* Will be called from irq-context when using foreign fences. */
+	spin_lock_irqsave(&engine->active.lock, flags);
+	GEM_BUG_ON(test_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags));
+
+	if (unlikely(ancestor_on_hold(engine, rq))) {
+		RQ_TRACE(rq, "ancestor on hold\n");
+		list_add_tail(&rq->sched.link, &engine->active.hold);
+		i915_request_set_hold(rq);
+	} else {
+		queue_request(engine, rq);
+
+		GEM_BUG_ON(RB_EMPTY_ROOT(&engine->execlists.queue.rb_root));
+
+		kick = submit_queue(engine, rq);
+	}
+
+	GEM_BUG_ON(list_empty(&rq->sched.link));
+	spin_unlock_irqrestore(&engine->active.lock, flags);
+	if (kick)
+		tasklet_hi_schedule(&engine->execlists.tasklet);
+}
+
 void i915_sched_node_init(struct i915_sched_node *node)
 {
 	spin_lock_init(&node->lock);
diff --git a/drivers/gpu/drm/i915/i915_scheduler.h b/drivers/gpu/drm/i915/i915_scheduler.h
index 2870fa3e089e..89d998f226e0 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.h
+++ b/drivers/gpu/drm/i915/i915_scheduler.h
@@ -40,6 +40,8 @@ void i915_sched_init_ipi(struct i915_sched_ipi *ipi);
 
 void i915_request_set_priority(struct i915_request *request, int prio);
 
+void i915_request_enqueue(struct i915_request *request);
+
 struct list_head *
 i915_sched_lookup_priolist(struct intel_engine_cs *engine, int prio);
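
[Editor's note] For readers outside the i915 tree, the shape of the change can be
illustrated with a small, self-contained user-space sketch: both backends install
one shared enqueue helper as engine->submit_request, and that helper only kicks
the backend's tasklet when a request raises the queue priority hint. Everything
below (struct engine, struct request, request_enqueue(), the pthread mutex
standing in for the engine spinlock, execlists_kick()) is a simplified stand-in
for illustration, not the driver's actual types or API.

#include <limits.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct request {
	int prio;
	struct request *next;
};

struct engine {
	pthread_mutex_t lock;		/* stand-in for engine->active.lock */
	struct request *queue;		/* stand-in for the priolist */
	int queue_priority_hint;	/* highest priority queued so far */
	void (*submit_request)(struct engine *engine, struct request *rq);
	void (*kick)(struct engine *engine); /* backend-specific tasklet */
};

/* Insert in priority order; stand-in for i915_sched_lookup_priolist(). */
static void queue_request(struct engine *engine, struct request *rq)
{
	struct request **pos = &engine->queue;

	while (*pos && (*pos)->prio >= rq->prio)
		pos = &(*pos)->next;
	rq->next = *pos;
	*pos = rq;
}

/* Only worth kicking if this request outranks everything already queued. */
static bool submit_queue(struct engine *engine, const struct request *rq)
{
	if (rq->prio <= engine->queue_priority_hint)
		return false;

	engine->queue_priority_hint = rq->prio;
	return true;
}

/* The one shared entry point both backends install as ->submit_request. */
static void request_enqueue(struct engine *engine, struct request *rq)
{
	bool kick;

	pthread_mutex_lock(&engine->lock);
	queue_request(engine, rq);
	kick = submit_queue(engine, rq);
	pthread_mutex_unlock(&engine->lock);

	if (kick)
		engine->kick(engine);	/* e.g. tasklet_hi_schedule() */
}

static void execlists_kick(struct engine *engine)
{
	(void)engine;
	printf("submission tasklet scheduled\n");
}

int main(void)
{
	static struct engine engine = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.queue_priority_hint = INT_MIN,
		.submit_request = request_enqueue,
		.kick = execlists_kick,
	};
	struct request high = { .prio = 10 }, low = { .prio = 0 };

	engine.submit_request(&engine, &high);	/* raises the hint -> kick */
	engine.submit_request(&engine, &low);	/* lower priority -> no kick */
	return 0;
}

The kick = submit_queue() split mirrors the patch: the decision is taken while
holding the lock, and the tasklet is scheduled only after the lock is dropped.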