From patchwork Thu Jun 26 17:24:01 2014
From: John.C.Harrison@Intel.com
To: Intel-GFX@lists.freedesktop.org
Date: Thu, 26 Jun 2014 18:24:01 +0100
Message-Id: <1403803475-16337-11-git-send-email-John.C.Harrison@Intel.com>
In-Reply-To: <1403803475-16337-1-git-send-email-John.C.Harrison@Intel.com>
References: <1403803475-16337-1-git-send-email-John.C.Harrison@Intel.com>
Subject: [Intel-gfx] [RFC 10/44] drm/i915: Prepare retire_requests to handle out-of-order seqnos

From: John Harrison

A major point of the GPU scheduler is that it re-orders batch buffers after
they have been submitted to the driver. Rather than attempting to re-assign
seqno values, it is much simpler to have each batch buffer keep its initially
assigned number and to modify the rest of the driver to cope with seqnos being
returned out of order. In practice, very little code actually needs updating.

One such place is the retire request handler. Rather than stopping as soon as
an uncompleted seqno is found, it must now keep iterating through the requests
in case later seqnos have completed. There is also a problem with freeing a
request before its objects have been moved off the active list. Thus the
requests are now moved to a temporary list first, then the objects are
de-activated, and finally the requests on the temporary list are freed.
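To make the new retire flow easier to follow, here is a condensed, stand-alone
sketch of the pattern described above. The sketch_* types and function names
are illustrative only (the real code operates on drm_i915_gem_request and
drm_i915_gem_object, as in the diff below), and the seqno test mirrors the
idea of the driver's wrap-safe i915_seqno_passed():

#include <linux/list.h>
#include <linux/types.h>

/* Illustrative stand-ins for the driver's request/object structures. */
struct sketch_request {
        u32 seqno;
        struct list_head list;
};

struct sketch_object {
        u32 last_read_seqno;
        struct list_head ring_list;
};

/* Wrap-safe comparison, same idea as i915_seqno_passed(). */
static bool sketch_seqno_passed(u32 seq1, u32 seq2)
{
        return (s32)(seq1 - seq2) >= 0;
}

static void sketch_retire(struct list_head *request_list,
                          struct list_head *active_list, u32 hw_seqno)
{
        struct sketch_request *req, *req_next;
        struct sketch_object *obj, *obj_next;
        LIST_HEAD(deferred_free);

        /* Pass 1: scan the whole request list; with re-ordering, completed
         * seqnos can appear anywhere, so never break out early. */
        list_for_each_entry_safe(req, req_next, request_list, list) {
                if (!sketch_seqno_passed(hw_seqno, req->seqno))
                        continue;       /* not finished yet - keep looking */

                list_move_tail(&req->list, &deferred_free);
        }

        /* Pass 2: de-activate completed objects while their requests (and
         * hence the associated context) still exist. */
        list_for_each_entry_safe(obj, obj_next, active_list, ring_list) {
                if (!sketch_seqno_passed(hw_seqno, obj->last_read_seqno))
                        continue;

                list_del_init(&obj->ring_list); /* stands in for move-to-inactive */
        }

        /* Pass 3: only now drain the deferred requests (the real driver
         * calls i915_gem_free_request() for each one here). */
        list_for_each_entry_safe(req, req_next, &deferred_free, list)
                list_del(&req->list);
}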
Reviewed-by: Jesse Barnes
---
 drivers/gpu/drm/i915/i915_gem.c | 60 +++++++++++++++++++++------------------
 1 file changed, 32 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index b784eb2..7e53446 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2602,7 +2602,10 @@ void i915_gem_reset(struct drm_device *dev)
 void
 i915_gem_retire_requests_ring(struct intel_engine_cs *ring)
 {
+        struct drm_i915_gem_object *obj, *obj_next;
+        struct drm_i915_gem_request *req, *req_next;
         uint32_t seqno;
+        LIST_HEAD(deferred_request_free);
 
         if (list_empty(&ring->request_list))
                 return;
@@ -2611,43 +2614,35 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *ring)
 
         seqno = ring->get_seqno(ring, true);
 
-        /* Move any buffers on the active list that are no longer referenced
-         * by the ringbuffer to the flushing/inactive lists as appropriate,
-         * before we free the context associated with the requests.
+        /* Note that seqno values might be out of order due to rescheduling and
+         * pre-emption. Thus both lists must be processed in their entirety
+         * rather than stopping at the first 'non-passed' entry.
          */
-        while (!list_empty(&ring->active_list)) {
-                struct drm_i915_gem_object *obj;
-
-                obj = list_first_entry(&ring->active_list,
-                                       struct drm_i915_gem_object,
-                                       ring_list);
-
-                if (!i915_seqno_passed(seqno, obj->last_read_seqno))
-                        break;
-
-                i915_gem_object_move_to_inactive(obj);
-        }
-
-        while (!list_empty(&ring->request_list)) {
-                struct drm_i915_gem_request *request;
-
-                request = list_first_entry(&ring->request_list,
-                                           struct drm_i915_gem_request,
-                                           list);
-
-                if (!i915_seqno_passed(seqno, request->seqno))
-                        break;
+        list_for_each_entry_safe(req, req_next, &ring->request_list, list) {
+                if (!i915_seqno_passed(seqno, req->seqno))
+                        continue;
 
-                trace_i915_gem_request_retire(ring, request->seqno);
+                trace_i915_gem_request_retire(ring, req->seqno);
 
                 /* We know the GPU must have read the request to have
                  * sent us the seqno + interrupt, so use the position
                  * of tail of the request to update the last known position
                  * of the GPU head.
                  */
-                ring->buffer->last_retired_head = request->tail;
+                ring->buffer->last_retired_head = req->tail;
 
-                i915_gem_free_request(request);
+                list_move_tail(&req->list, &deferred_request_free);
+        }
+
+        /* Move any buffers on the active list that are no longer referenced
+         * by the ringbuffer to the flushing/inactive lists as appropriate,
+         * before we free the context associated with the requests.
+         */
+        list_for_each_entry_safe(obj, obj_next, &ring->active_list, ring_list) {
+                if (!i915_seqno_passed(seqno, obj->last_read_seqno))
+                        continue;
+
+                i915_gem_object_move_to_inactive(obj);
         }
 
         if (unlikely(ring->trace_irq_seqno &&
@@ -2656,6 +2651,15 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *ring)
                 ring->trace_irq_seqno = 0;
         }
 
+        /* Finish processing active list before freeing request */
+        while (!list_empty(&deferred_request_free)) {
+                req = list_first_entry(&deferred_request_free,
+                                       struct drm_i915_gem_request,
+                                       list);
+
+                i915_gem_free_request(req);
+        }
+
         WARN_ON(i915_verify_lists(ring->dev));
 }
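For context (not part of this patch), the seqno test used in both loops is the
driver's existing wrap-safe helper from i915_drv.h, which is essentially:

static inline bool
i915_seqno_passed(uint32_t seq1, uint32_t seq2)
{
        /* Signed subtraction keeps the test correct across 32-bit wrap. */
        return (int32_t)(seq1 - seq2) >= 0;
}

A hypothetical worked example of why the loops now use 'continue' rather than
'break': if the scheduler has re-ordered execution so that the request list
holds seqnos 5, 8 and 6 in submission order while the ring reports a completed
seqno of 7, the old loop would stop at 8 and leave 6 unretired even though it
has finished. The new loop skips 8 and still retires 5 and 6, and the deferred
free list ensures those requests are only freed after their objects have been
moved off the active list.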