From patchwork Mon Sep 28 15:25:04 2015
X-Patchwork-Submitter: Mika Kuoppala
X-Patchwork-Id: 7278571
From: Mika Kuoppala
To: Chris Wilson, intel-gfx@lists.freedesktop.org
In-Reply-To: <1441281700-17814-2-git-send-email-chris@chris-wilson.co.uk>
References: <1441281700-17814-1-git-send-email-chris@chris-wilson.co.uk>
 <1441281700-17814-2-git-send-email-chris@chris-wilson.co.uk>
Date: Mon, 28 Sep 2015 18:25:04 +0300
Message-ID: <87612ua7n3.fsf@gaia.fi.intel.com>
Subject: Re: [Intel-gfx] [PATCH 2/2] drm/i915: Recover all available
 ringbuffer space following reset

Hi,

Chris Wilson writes:

> Having flushed all requests from all queues, we know that all
> ringbuffers must now be empty. However, since we do not reclaim
> all space when retiring the request (to prevent HEADs colliding
> with rapid ringbuffer wraparound) the amount of available space
> on each ringbuffer upon reset is less than when we start. Do one
> more pass over all the ringbuffers to reset the available space.
>
> Signed-off-by: Chris Wilson
> Cc: Arun Siluvery
> Cc: Mika Kuoppala
> Cc: Dave Gordon
> ---
>  drivers/gpu/drm/i915/i915_gem.c         | 14 ++++++++++++++
>  drivers/gpu/drm/i915/intel_lrc.c        |  1 +
>  drivers/gpu/drm/i915/intel_ringbuffer.c | 13 ++++++++++---
>  drivers/gpu/drm/i915/intel_ringbuffer.h |  2 ++
>  4 files changed, 27 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 41263cd4170c..3a42c350fec9 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -2738,6 +2738,8 @@ static void i915_gem_reset_ring_status(struct drm_i915_private *dev_priv,
>  static void i915_gem_reset_ring_cleanup(struct drm_i915_private *dev_priv,
>  					struct intel_engine_cs *ring)
>  {
> +	struct intel_ringbuffer *buffer;
> +
>  	while (!list_empty(&ring->active_list)) {
>  		struct drm_i915_gem_object *obj;
>
> @@ -2783,6 +2785,18 @@ static void i915_gem_reset_ring_cleanup(struct drm_i915_private *dev_priv,
>
>  		i915_gem_request_retire(request);
>  	}
> +
> +	/* Having flushed all requests from all queues, we know that all
> +	 * ringbuffers must now be empty. However, since we do not reclaim
> +	 * all space when retiring the request (to prevent HEADs colliding
> +	 * with rapid ringbuffer wraparound) the amount of available space
> +	 * upon reset is less than when we start. Do one more pass over
> +	 * all the ringbuffers to reset last_retired_head.
> +	 */
> +	list_for_each_entry(buffer, &ring->buffers, link) {
> +		buffer->last_retired_head = buffer->tail;
> +		intel_ring_update_space(buffer);
> +	}
>  }
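For reference, the space accounting this pass relies on lives in
intel_ringbuffer.c. From memory it looks roughly like the below (a
sketch, not part of this patch), which is why parking last_retired_head
at tail brings a ring back to full space:

int __intel_ring_space(int head, int tail, int size)
{
	int space = head - tail;
	if (space <= 0)
		space += size;
	return space - I915_RING_FREE_SPACE;
}

void intel_ring_update_space(struct intel_ringbuffer *ringbuf)
{
	if (ringbuf->last_retired_head != -1) {
		/* Adopt the last retired position as the new HEAD */
		ringbuf->head = ringbuf->last_retired_head;
		ringbuf->last_retired_head = -1;
	}

	ringbuf->space = __intel_ring_space(ringbuf->head & HEAD_ADDR,
					    ringbuf->tail, ringbuf->size);
}

With head == tail the ring reports its maximum space again,
size - I915_RING_FREE_SPACE.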
Right after cleaning up the rings in i915_gem_reset(), we call
i915_gem_context_reset(). That goes through all contexts and their
ringbuffers and sets head and tail to zero.

If we do the space adjustment in intel_lr_context_reset() instead, we
can avoid adding the new ring->buffers list for this purpose (diff
appended below, after the quoted patch):

Thanks,
--Mika

>
>  void i915_gem_reset(struct drm_device *dev)
>
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index 28a712e7d2d0..de52ddc108a7 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -1881,6 +1881,7 @@ static int logical_ring_init(struct drm_device *dev, struct intel_engine_cs *ring)
>  	i915_gem_batch_pool_init(dev, &ring->batch_pool);
>  	init_waitqueue_head(&ring->irq_queue);
>
> +	INIT_LIST_HEAD(&ring->buffers);
>  	INIT_LIST_HEAD(&ring->execlist_queue);
>  	INIT_LIST_HEAD(&ring->execlist_retired_req_list);
>  	spin_lock_init(&ring->execlist_lock);
> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
> index 20a75bb516ac..d2e0b3b7efbf 100644
> --- a/drivers/gpu/drm/i915/intel_ringbuffer.c
> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
> @@ -2030,10 +2030,14 @@ intel_engine_create_ringbuffer(struct intel_engine_cs *engine, int size)
>  	int ret;
>
>  	ring = kzalloc(sizeof(*ring), GFP_KERNEL);
> -	if (ring == NULL)
> +	if (ring == NULL) {
> +		DRM_DEBUG_DRIVER("Failed to allocate ringbuffer %s\n",
> +				 engine->name);
>  		return ERR_PTR(-ENOMEM);
> +	}
>
>  	ring->ring = engine;
> +	list_add(&ring->link, &engine->buffers);
>
>  	ring->size = size;
>  	/* Workaround an erratum on the i830 which causes a hang if
> @@ -2049,8 +2053,9 @@ intel_engine_create_ringbuffer(struct intel_engine_cs *engine, int size)
>
>  	ret = intel_alloc_ringbuffer_obj(engine->dev, ring);
>  	if (ret) {
> -		DRM_ERROR("Failed to allocate ringbuffer %s: %d\n",
> -			  engine->name, ret);
> +		DRM_DEBUG_DRIVER("Failed to allocate ringbuffer %s: %d\n",
> +				 engine->name, ret);
> +		list_del(&ring->link);
>  		kfree(ring);
>  		return ERR_PTR(ret);
>  	}
> @@ -2062,6 +2067,7 @@ void
>  intel_ringbuffer_free(struct intel_ringbuffer *ring)
>  {
>  	intel_destroy_ringbuffer_obj(ring);
> +	list_del(&ring->link);
>  	kfree(ring);
>  }
>
> @@ -2077,6 +2083,7 @@ static int intel_init_ring_buffer(struct drm_device *dev,
>  	INIT_LIST_HEAD(&ring->active_list);
>  	INIT_LIST_HEAD(&ring->request_list);
>  	INIT_LIST_HEAD(&ring->execlist_queue);
> +	INIT_LIST_HEAD(&ring->buffers);
>  	i915_gem_batch_pool_init(dev, &ring->batch_pool);
>  	memset(ring->semaphore.sync_seqno, 0, sizeof(ring->semaphore.sync_seqno));
>
> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
> index 49fa41dc0eb6..58b1976a7d0a 100644
> --- a/drivers/gpu/drm/i915/intel_ringbuffer.h
> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
> @@ -100,6 +100,7 @@ struct intel_ringbuffer {
>  	void __iomem *virtual_start;
>
>  	struct intel_engine_cs *ring;
> +	struct list_head link;
>
>  	u32 head;
>  	u32 tail;
> @@ -157,6 +158,7 @@ struct intel_engine_cs {
>  	u32 mmio_base;
>  	struct drm_device *dev;
>  	struct intel_ringbuffer *buffer;
> +	struct list_head buffers;
>
>  	/*
>  	 * A pool of objects to use as shadow copies of client batch buffers
> --
> 2.5.1

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 256167b..e110d6b 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -2532,15 +2532,16 @@ void intel_lr_context_reset(struct drm_device *dev,
 			WARN(1, "Failed get_pages for context obj\n");
 			continue;
 		}
+
+		ringbuf->last_retired_head = ringbuf->tail;
+		intel_ring_update_space(ringbuf);
+
 		page = i915_gem_object_get_page(ctx_obj, LRC_STATE_PN);
 		reg_state = kmap_atomic(page);
 
-		reg_state[CTX_RING_HEAD+1] = 0;
-		reg_state[CTX_RING_TAIL+1] = 0;
+		reg_state[CTX_RING_HEAD+1] = ringbuf->head;
+		reg_state[CTX_RING_TAIL+1] = ringbuf->tail;
 
 		kunmap_atomic(reg_state);
-
-		ringbuf->head = 0;
-		ringbuf->tail = 0;
 	}
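One note on the suggested diff, spelling out my reasoning (so
double-check me): since intel_ring_update_space() consumes
last_retired_head, the two added lines leave the software state as

	ringbuf->last_retired_head = ringbuf->tail;
	intel_ring_update_space(ringbuf);
	/* afterwards:
	 *   ringbuf->head == ringbuf->tail   (ring drained)
	 *   ringbuf->last_retired_head == -1 (marker consumed)
	 *   ringbuf->space back at its maximum
	 */

so the head/tail pair we then write into CTX_RING_HEAD/CTX_RING_TAIL
keeps the context image consistent with that drained, full-space
ringbuffer, rather than forcing both registers to zero.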