From patchwork Mon Jan 11 11:01:29 2016
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 8002181
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org
Date: Mon, 11 Jan 2016 11:01:29 +0000
Message-Id: <1452510091-6833-47-git-send-email-chris@chris-wilson.co.uk>
In-Reply-To: <1452510091-6833-1-git-send-email-chris@chris-wilson.co.uk>
References: <1452503961-14837-1-git-send-email-chris@chris-wilson.co.uk>
 <1452510091-6833-1-git-send-email-chris@chris-wilson.co.uk>
Subject: [Intel-gfx] [PATCH 188/190] drm/i915: Use VMA for ringbuffer tracking

Use the GGTT VMA as the primary cookie for handling ring objects, as the
most common action upon the ring is mapping and unmapping, which act upon
the VMA itself. By restructuring the code to work with the ring VMA, we
can shrink the code and remove a few cycles from context pinning.
Signed-off-by: Chris Wilson
---
 drivers/gpu/drm/i915/i915_debugfs.c     |   2 +-
 drivers/gpu/drm/i915/intel_ringbuffer.c | 135 ++++++++++++++------------------
 drivers/gpu/drm/i915/intel_ringbuffer.h |   2 +-
 3 files changed, 61 insertions(+), 78 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 7fb4088b3966..af2ec70dd7ab 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -379,7 +379,7 @@ static int per_file_ctx_stats(int id, void *ptr, void *data)
 		if (ctx->engine[n].state)
 			per_file_stats(0, ctx->engine[n].state->obj, data);
 		if (ctx->engine[n].ring)
-			per_file_stats(0, ctx->engine[n].ring->obj, data);
+			per_file_stats(0, ctx->engine[n].ring->vma->obj, data);
 	}
 
 	return 0;
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 41c52cdcbe4a..512841df2527 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -1899,108 +1899,91 @@ static int init_phys_status_page(struct intel_engine_cs *ring)
 
 int intel_ring_map(struct intel_ring *ring)
 {
-	struct drm_i915_gem_object *obj = ring->obj;
-	struct i915_vma *vma;
+	void *ptr;
 	int ret;
 
-	if (HAS_LLC(ring->engine->i915) && !obj->stolen) {
-		vma = i915_gem_object_ggtt_pin(obj, NULL,
-					       0, PAGE_SIZE,
-					       PIN_HIGH);
-		if (IS_ERR(vma))
-			return PTR_ERR(vma);
+	GEM_BUG_ON(ring->virtual_start);
 
-		ret = i915_gem_object_set_to_cpu_domain(obj, true);
-		if (ret)
-			goto unpin;
-
-		ring->virtual_start = i915_gem_object_pin_vmap(obj);
-		if (IS_ERR(ring->virtual_start)) {
-			ret = PTR_ERR(ring->virtual_start);
-			ring->virtual_start = NULL;
-			goto unpin;
-		}
-	} else {
-		vma = i915_gem_object_ggtt_pin(obj, NULL,
-					       0, PAGE_SIZE,
-					       PIN_MAPPABLE);
-		if (IS_ERR(vma))
-			return PTR_ERR(vma);
+	ret = i915_vma_pin(ring->vma, 0, PAGE_SIZE,
+			   PIN_GLOBAL | (ring->vmap ? PIN_HIGH : PIN_MAPPABLE));
+	if (unlikely(ret))
+		return ret;
 
-		ret = i915_gem_object_set_to_gtt_domain(obj, true);
-		if (ret)
-			goto unpin;
-
-		ring->virtual_start = ioremap_wc(ring->engine->i915->gtt.mappable_base +
-						 vma->node.start,
-						 ring->size);
-		if (ring->virtual_start == NULL) {
-			ret = -ENOMEM;
-			goto unpin;
-		}
+	if (ring->vmap)
+		ptr = i915_gem_object_pin_vmap(ring->vma->obj);
+	else
+		ptr = i915_vma_iomap(ring->engine->i915, ring->vma);
+	if (IS_ERR(ptr)) {
+		i915_vma_unpin(ring->vma);
+		return PTR_ERR(ptr);
 	}
 
-	ring->vma = vma;
+	ring->virtual_start = ptr;
 	return 0;
-
-unpin:
-	i915_vma_unpin(vma);
-	return ret;
 }
 
 void intel_ring_unmap(struct intel_ring *ring)
 {
-	if (HAS_LLC(ring->engine->i915) && !ring->obj->stolen)
-		i915_gem_object_unpin_vmap(ring->obj);
-	else
-		iounmap(ring->virtual_start);
+	GEM_BUG_ON(ring->virtual_start == NULL);
 
-	i915_vma_unpin(ring->vma);
-	ring->vma = NULL;
-}
+	if (ring->vmap)
+		i915_gem_object_unpin_vmap(ring->vma->obj);
+	ring->virtual_start = NULL;
 
-static void intel_destroy_ringbuffer_obj(struct intel_ring *ringbuf)
-{
-	__i915_gem_object_release_unless_active(ringbuf->obj);
-	ringbuf->obj = NULL;
+	i915_vma_unpin(ring->vma);
 }
 
-static int intel_alloc_ringbuffer_obj(struct drm_device *dev,
-				      struct intel_ring *ringbuf)
+static struct i915_vma *
+intel_ring_create_vma(struct drm_device *dev, int size)
 {
 	struct drm_i915_gem_object *obj;
+	struct i915_vma *vma;
+	int ret;
 
 	obj = NULL;
 	if (!HAS_LLC(dev))
-		obj = i915_gem_object_create_stolen(dev, ringbuf->size);
+		obj = i915_gem_object_create_stolen(dev, size);
 	if (obj == NULL)
-		obj = i915_gem_alloc_object(dev, ringbuf->size);
+		obj = i915_gem_alloc_object(dev, size);
 	if (obj == NULL)
-		return -ENOMEM;
+		return ERR_PTR(-ENOMEM);
 
 	/* mark ring buffers as read-only from GPU side by default */
 	obj->gt_ro = 1;
 
-	ringbuf->obj = obj;
+	if (HAS_LLC(dev) && !obj->stolen)
+		ret = i915_gem_object_set_to_cpu_domain(obj, true);
+	else
+		ret = i915_gem_object_set_to_gtt_domain(obj, true);
+	if (ret) {
+		vma = ERR_PTR(ret);
+		goto err;
+	}
+
+	vma = i915_gem_obj_lookup_or_create_vma(obj,
+						&to_i915(dev)->gtt.base,
+						NULL);
+	if (IS_ERR(vma))
+		goto err;
+
+	return vma;
 
-	return 0;
+err:
+	drm_gem_object_unreference(&obj->base);
+	return vma;
 }
 
 struct intel_ring *
 intel_engine_create_ring(struct intel_engine_cs *engine, int size)
 {
 	struct intel_ring *ring;
-	int ret;
+	struct i915_vma *vma;
 
 	ring = kzalloc(sizeof(*ring), GFP_KERNEL);
-	if (ring == NULL) {
-		DRM_DEBUG_DRIVER("Failed to allocate ringbuffer %s\n",
-				 engine->name);
+	if (ring == NULL)
 		return ERR_PTR(-ENOMEM);
-	}
 
 	ring->engine = engine;
-	list_add(&ring->link, &engine->buffers);
 
 	ring->size = size;
 	/* Workaround an erratum on the i830 which causes a hang if
@@ -2008,28 +1991,29 @@ intel_engine_create_ring(struct intel_engine_cs *engine, int size)
 	 * of the buffer.
 	 */
 	ring->effective_size = size;
-	if (IS_I830(engine->dev) || IS_845G(engine->dev))
+	if (IS_I830(engine->i915) || IS_845G(engine->i915))
 		ring->effective_size -= 2 * CACHELINE_BYTES;
 
 	ring->last_retired_head = -1;
 	intel_ring_update_space(ring);
 
-	ret = intel_alloc_ringbuffer_obj(engine->dev, ring);
-	if (ret) {
-		DRM_DEBUG_DRIVER("Failed to allocate ringbuffer %s: %d\n",
-				 engine->name, ret);
-		list_del(&ring->link);
+	vma = intel_ring_create_vma(engine->dev, size);
+	if (IS_ERR(vma)) {
 		kfree(ring);
-		return ERR_PTR(ret);
+		return ERR_CAST(vma);
 	}
+	ring->vma = vma;
+	if (HAS_LLC(engine->i915) && !vma->obj->stolen)
+		ring->vmap = true;
 
+	list_add(&ring->link, &engine->buffers);
 	return ring;
 }
 
 void
 intel_ring_free(struct intel_ring *ring)
 {
-	intel_destroy_ringbuffer_obj(ring);
+	__i915_gem_object_release_unless_active(ring->vma->obj);
 	list_del(&ring->link);
 	kfree(ring);
 }
@@ -2058,7 +2042,6 @@ static int intel_init_engine(struct drm_device *dev,
 		ret = PTR_ERR(ringbuf);
 		goto error;
 	}
-	engine->buffer = ringbuf;
 
 	if (I915_NEED_GFX_HWS(dev)) {
 		ret = init_status_page(engine);
@@ -2073,12 +2056,12 @@ static int intel_init_engine(struct drm_device *dev,
 
 	ret = intel_ring_map(ringbuf);
 	if (ret) {
-		DRM_ERROR("Failed to pin and map ringbuffer %s: %d\n",
-			  engine->name, ret);
-		intel_destroy_ringbuffer_obj(ringbuf);
+		intel_ring_free(ringbuf);
 		goto error;
 	}
 
+	engine->buffer = ringbuf;
+
 	ret = i915_cmd_parser_init_ring(engine);
 	if (ret)
 		goto error;
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index d24d0e438f49..3ae941b338ca 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -95,7 +95,6 @@ struct intel_engine_hangcheck {
 };
 
 struct intel_ring {
-	struct drm_i915_gem_object *obj;
 	struct i915_vma *vma;
 	void *virtual_start;
@@ -110,6 +109,7 @@ struct intel_ring {
 	int reserved_size;
 	int reserved_tail;
 	bool reserved_in_use;
+	bool vmap;
 
 	/** We track the position of the requests in the ring buffer, and
 	 * when each is retired we increment last_retired_head as the GPU