From patchwork Thu Aug 9 15:11:07 2018
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 10561529
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org
Date: Thu, 9 Aug 2018 16:11:07 +0100
Message-Id: <20180809151107.4188-1-chris@chris-wilson.co.uk>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180809130701.31445-1-chris@chris-wilson.co.uk>
References: <20180809130701.31445-1-chris@chris-wilson.co.uk>
Subject: [Intel-gfx] [PATCH v2] drm/i915/execlists: Avoid kicking priority on the current context

If the request is currently on the HW (in port 0), then we do not need
to kick the submission tasklet to evaluate whether we should preempt
ourselves in order to execute it again.

In the case that was annoying me:

   execlists_schedule: rq(18:211173).prio=0 -> 2
   need_preempt: last(18:211174).prio=0, queue.prio=2

We are bumping the priority of the first of a pair of requests running
in the current context. Then, when evaluating preemption, we would see
that our prioritised request is higher than the last executing request
in ELSP0 and so trigger preemption, not realising that our intended
request was already executing.

v2: As we assume a state of execlists->port[] that is only valid while
we hold the timeline lock, we have to repeat some of the earlier tests
on the validity of the node.
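(Aside: the "already on the HW" test added below leans on the driver's
wrap-safe seqno comparison. A minimal standalone sketch of that idiom,
assuming the usual i915_seqno_passed() definition from the i915 tree;
the userspace typedefs are mine:

   #include <stdbool.h>
   #include <stdint.h>

   typedef uint32_t u32;	/* stand-ins for the kernel typedefs */
   typedef int32_t s32;

   static inline bool i915_seqno_passed(u32 seq1, u32 seq2)
   {
   	/* Wrap-safe: true if seq1 is at or after seq2, even across u32 overflow */
   	return (s32)(seq1 - seq2) >= 0;
   }

So if the seqno of the request in ELSP[0] has passed the node's seqno,
the node was submitted at or before the currently executing request,
i.e. it is already on the HW.)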
Signed-off-by: Chris Wilson
Cc: Tvrtko Ursulin
---
 drivers/gpu/drm/i915/intel_lrc.c | 36 ++++++++++++++++++++++++++------
 1 file changed, 30 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index e5385dbfcdda..2257bc7b167f 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1238,9 +1238,13 @@ static void execlists_schedule(struct i915_request *request,
 
 		engine = sched_lock_engine(node, engine);
 
+		/* Recheck after acquiring the engine->timeline.lock */
 		if (prio <= node->attr.priority)
 			continue;
 
+		if (i915_sched_node_signaled(node))
+			continue;
+
 		node->attr.priority = prio;
 		if (!list_empty(&node->link)) {
 			if (last != engine) {
@@ -1249,14 +1253,34 @@ static void execlists_schedule(struct i915_request *request,
 			}
 			GEM_BUG_ON(pl->priority != prio);
 			list_move_tail(&node->link, &pl->requests);
+		} else {
+			/*
+			 * If the request is not in the priolist queue because
+			 * it is not yet runnable, then it doesn't contribute
+			 * to our preemption decisions. On the other hand,
+			 * if the request is on the HW, it too is not in the
+			 * queue; but in that case we may still need to reorder
+			 * the inflight requests.
+			 */
+			if (!i915_sw_fence_done(&sched_to_request(node)->submit))
+				continue;
 		}
 
-		if (prio > engine->execlists.queue_priority &&
-		    i915_sw_fence_done(&sched_to_request(node)->submit)) {
-			/* defer submission until after all of our updates */
-			__update_queue(engine, prio);
-			tasklet_hi_schedule(&engine->execlists.tasklet);
-		}
+		if (prio <= engine->execlists.queue_priority)
+			continue;
+
+		/*
+		 * If we are already the currently executing context, don't
+		 * bother evaluating if we should preempt ourselves.
+		 */
+		if (sched_to_request(node)->global_seqno &&
+		    i915_seqno_passed(port_request(engine->execlists.port)->global_seqno,
+				      sched_to_request(node)->global_seqno))
+			continue;
+
+		/* Defer (tasklet) submission until after all of our updates. */
+		__update_queue(engine, prio);
+		tasklet_hi_schedule(&engine->execlists.tasklet);
 	}
 
 	spin_unlock_irq(&engine->timeline.lock);
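For illustration only, here is a tiny userspace model of the v2
decision (all names here are mine, not the driver's), using the seqnos
from the trace above: a priority bump only kicks the tasklet when the
bumped request has not already been submitted to the HW.

   #include <stdbool.h>
   #include <stdint.h>
   #include <stdio.h>

   /* Wrap-safe seqno comparison, as in the sketch above */
   static bool seqno_passed(uint32_t a, uint32_t b)
   {
   	return (int32_t)(a - b) >= 0;
   }

   struct toy_rq {
   	uint32_t global_seqno;	/* 0 => not yet submitted to the HW */
   };

   /* Should bumping rq to prio kick the tasklet to re-evaluate preemption? */
   static bool should_kick(const struct toy_rq *elsp0,
   			const struct toy_rq *rq,
   			int prio, int queue_priority)
   {
   	if (prio <= queue_priority)
   		return false;
   	/* v2: skip if rq is already on the HW (at or before ELSP[0]) */
   	if (rq->global_seqno &&
   	    seqno_passed(elsp0->global_seqno, rq->global_seqno))
   		return false;
   	return true;
   }

   int main(void)
   {
   	struct toy_rq elsp0 = { .global_seqno = 211174 };
   	struct toy_rq rq    = { .global_seqno = 211173 };

   	/* rq(18:211173) runs ahead of last(18:211174): no kick needed */
   	printf("kick=%d\n", should_kick(&elsp0, &rq, 2, 0)); /* prints kick=0 */
   	return 0;
   }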