From patchwork Mon Jan 11 09:17:12 2016
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org
Date: Mon, 11 Jan 2016 09:17:12 +0000
Message-Id: <1452503961-14837-61-git-send-email-chris@chris-wilson.co.uk>
In-Reply-To: <1452503961-14837-1-git-send-email-chris@chris-wilson.co.uk>
References: <1452503961-14837-1-git-send-email-chris@chris-wilson.co.uk>
Subject: [Intel-gfx] [PATCH 061/190] drm/i915: Rename intel_context[engine].ringbuf

Perform s/ringbuf/ring/ on the context struct for consistency with the
ring/engine split.
Signed-off-by: Chris Wilson
---
 drivers/gpu/drm/i915/i915_debugfs.c        |  2 +-
 drivers/gpu/drm/i915/i915_drv.h            |  2 +-
 drivers/gpu/drm/i915/i915_guc_submission.c |  6 +--
 drivers/gpu/drm/i915/intel_lrc.c           | 63 ++++++++++++++----------------
 4 files changed, 35 insertions(+), 38 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 018076c89247..6e91726db8d3 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -1988,7 +1988,7 @@ static int i915_context_status(struct seq_file *m, void *unused)
 		struct drm_i915_gem_object *ctx_obj = ctx->engine[i].state;
 		struct intel_ringbuffer *ringbuf =
-				ctx->engine[i].ringbuf;
+				ctx->engine[i].ring;
 
 		seq_printf(m, "%s: ", ring->name);
 		if (ctx_obj)
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index baede4517c70..9f06dd19bfb2 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -885,7 +885,7 @@ struct intel_context {
 	/* Execlists */
 	struct {
 		struct drm_i915_gem_object *state;
-		struct intel_ringbuffer *ringbuf;
+		struct intel_ringbuffer *ring;
 		int pin_count;
 	} engine[I915_NUM_RINGS];
 
diff --git a/drivers/gpu/drm/i915/i915_guc_submission.c b/drivers/gpu/drm/i915/i915_guc_submission.c
index 53abe2143f8a..b47e630e048a 100644
--- a/drivers/gpu/drm/i915/i915_guc_submission.c
+++ b/drivers/gpu/drm/i915/i915_guc_submission.c
@@ -390,7 +390,7 @@ static void guc_init_ctx_desc(struct intel_guc *guc,
 
 	for (i = 0; i < I915_NUM_RINGS; i++) {
 		struct guc_execlist_context *lrc = &desc.lrc[i];
-		struct intel_ringbuffer *ringbuf = ctx->engine[i].ringbuf;
+		struct intel_ringbuffer *ring = ctx->engine[i].ring;
 		struct intel_engine_cs *engine;
 		struct drm_i915_gem_object *obj;
 		uint64_t ctx_desc;
@@ -406,7 +406,7 @@ static void guc_init_ctx_desc(struct intel_guc *guc,
 		if (!obj)
 			break;	/* XXX: continue? */
 
-		engine = ringbuf->engine;
+		engine = ring->engine;
 		ctx_desc = intel_lr_context_descriptor(ctx, engine);
 		lrc->context_desc = (u32)ctx_desc;
@@ -416,7 +416,7 @@ static void guc_init_ctx_desc(struct intel_guc *guc,
 		lrc->context_id = (client->ctx_index << GUC_ELC_CTXID_OFFSET) |
 				(engine->id << GUC_ELC_ENGINE_OFFSET);
 
-		obj = ringbuf->obj;
+		obj = ring->obj;
 		lrc->ring_begin = i915_gem_obj_ggtt_offset(obj);
 		lrc->ring_end = lrc->ring_begin + obj->base.size - 1;
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 8639ebfab96f..65beb7267d1a 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -402,24 +402,24 @@ static void execlists_submit_requests(struct drm_i915_gem_request *rq0,
 	execlists_elsp_write(rq0, rq1);
 }
 
-static void execlists_context_unqueue(struct intel_engine_cs *ring)
+static void execlists_context_unqueue(struct intel_engine_cs *engine)
 {
 	struct drm_i915_gem_request *req0 = NULL, *req1 = NULL;
 	struct drm_i915_gem_request *cursor = NULL, *tmp = NULL;
 
-	assert_spin_locked(&ring->execlist_lock);
+	assert_spin_locked(&engine->execlist_lock);
 
 	/*
 	 * If irqs are not active generate a warning as batches that finish
 	 * without the irqs may get lost and a GPU Hang may occur.
 	 */
-	WARN_ON(!intel_irqs_enabled(ring->dev->dev_private));
+	WARN_ON(!intel_irqs_enabled(engine->dev->dev_private));
 
-	if (list_empty(&ring->execlist_queue))
+	if (list_empty(&engine->execlist_queue))
 		return;
 
 	/* Try to read in pairs */
-	list_for_each_entry_safe(cursor, tmp, &ring->execlist_queue,
+	list_for_each_entry_safe(cursor, tmp, &engine->execlist_queue,
 				 execlist_link) {
 		if (!req0) {
 			req0 = cursor;
@@ -429,7 +429,7 @@ static void execlists_context_unqueue(struct intel_engine_cs *ring)
 			cursor->elsp_submitted = req0->elsp_submitted;
 			list_del(&req0->execlist_link);
 			list_add_tail(&req0->execlist_link,
-				      &ring->execlist_retired_req_list);
+				      &engine->execlist_retired_req_list);
 			req0 = cursor;
 		} else {
 			req1 = cursor;
@@ -437,7 +437,7 @@ static void execlists_context_unqueue(struct intel_engine_cs *ring)
 		}
 	}
 
-	if (IS_GEN8(ring->dev) || IS_GEN9(ring->dev)) {
+	if (IS_GEN8(engine->dev) || IS_GEN9(engine->dev)) {
 		/*
 		 * WaIdleLiteRestore: make sure we never cause a lite
 		 * restore with HEAD==TAIL
@@ -449,11 +449,11 @@ static void execlists_context_unqueue(struct intel_engine_cs *ring)
 		 * for where we prepare the padding after the end of the
 		 * request.
 		 */
-		struct intel_ringbuffer *ringbuf;
+		struct intel_ringbuffer *ring;
 
-		ringbuf = req0->ctx->engine[ring->id].ringbuf;
+		ring = req0->ctx->engine[engine->id].ring;
 		req0->tail += 8;
-		req0->tail &= ringbuf->size - 1;
+		req0->tail &= ring->size - 1;
 	}
 }
@@ -671,7 +671,7 @@ int intel_logical_ring_alloc_request_extras(struct drm_i915_gem_request *request
 {
 	int ret;
 
-	request->ring = request->ctx->engine[request->engine->id].ringbuf;
+	request->ring = request->ctx->engine[request->engine->id].ring;
 
 	if (request->ctx != request->engine->default_context) {
 		ret = intel_lr_context_pin(request);
@@ -1775,7 +1775,7 @@ static int logical_ring_init(struct drm_device *dev, struct intel_engine_cs *ring)
 	ret = intel_lr_context_do_pin(
 			ring,
 			ring->default_context->engine[ring->id].state,
-			ring->default_context->engine[ring->id].ringbuf);
+			ring->default_context->engine[ring->id].ring);
 	if (ret) {
 		DRM_ERROR(
 			"Failed to pin and map ringbuffer %s: %d\n",
@@ -2177,16 +2177,15 @@ void intel_lr_context_free(struct intel_context *ctx)
 		struct drm_i915_gem_object *ctx_obj = ctx->engine[i].state;
 
 		if (ctx_obj) {
-			struct intel_ringbuffer *ringbuf =
-					ctx->engine[i].ringbuf;
-			struct intel_engine_cs *engine = ringbuf->engine;
+			struct intel_ringbuffer *ring = ctx->engine[i].ring;
+			struct intel_engine_cs *engine = ring->engine;
 
 			if (ctx == engine->default_context) {
-				intel_unpin_ringbuffer_obj(ringbuf);
+				intel_unpin_ringbuffer_obj(ring);
 				i915_gem_object_ggtt_unpin(ctx_obj);
 			}
 			WARN_ON(ctx->engine[engine->id].pin_count);
-			intel_ringbuffer_free(ringbuf);
+			intel_ringbuffer_free(ring);
 			drm_gem_object_unreference(&ctx_obj->base);
 		}
 	}
@@ -2266,7 +2265,7 @@ int intel_lr_context_deferred_alloc(struct intel_context *ctx,
 {
 	struct drm_i915_gem_object *ctx_obj;
 	uint32_t context_size;
-	struct intel_ringbuffer *ringbuf;
+	struct intel_ringbuffer *ring;
 	int ret;
 
 	WARN_ON(ctx->legacy_hw_ctx.rcs_state != NULL);
@@ -2283,19 +2282,19 @@ int intel_lr_context_deferred_alloc(struct intel_context *ctx,
 		return -ENOMEM;
 	}
 
-	ringbuf = intel_engine_create_ringbuffer(engine, 4 * PAGE_SIZE);
-	if (IS_ERR(ringbuf)) {
-		ret = PTR_ERR(ringbuf);
+	ring = intel_engine_create_ringbuffer(engine, 4 * PAGE_SIZE);
+	if (IS_ERR(ring)) {
+		ret = PTR_ERR(ring);
 		goto error_deref_obj;
 	}
 
-	ret = populate_lr_context(ctx, ctx_obj, engine, ringbuf);
+	ret = populate_lr_context(ctx, ctx_obj, engine, ring);
 	if (ret) {
 		DRM_DEBUG_DRIVER("Failed to populate LRC: %d\n", ret);
 		goto error_ringbuf;
 	}
 
-	ctx->engine[engine->id].ringbuf = ringbuf;
+	ctx->engine[engine->id].ring = ring;
 	ctx->engine[engine->id].state = ctx_obj;
 
 	if (ctx != engine->default_context && engine->init_context) {
@@ -2320,10 +2319,10 @@ int intel_lr_context_deferred_alloc(struct intel_context *ctx,
 	return 0;
 
 error_ringbuf:
-	intel_ringbuffer_free(ringbuf);
+	intel_ringbuffer_free(ring);
 error_deref_obj:
 	drm_gem_object_unreference(&ctx_obj->base);
-	ctx->engine[engine->id].ringbuf = NULL;
+	ctx->engine[engine->id].ring = NULL;
 	ctx->engine[engine->id].state = NULL;
 	return ret;
 }
@@ -2332,14 +2331,12 @@ void intel_lr_context_reset(struct drm_device *dev,
 			struct intel_context *ctx)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
-	struct intel_engine_cs *ring;
+	struct intel_engine_cs *unused;
 	int i;
 
-	for_each_ring(ring, dev_priv, i) {
-		struct drm_i915_gem_object *ctx_obj =
-				ctx->engine[ring->id].state;
-		struct intel_ringbuffer *ringbuf =
-				ctx->engine[ring->id].ringbuf;
+	for_each_ring(unused, dev_priv, i) {
+		struct drm_i915_gem_object *ctx_obj = ctx->engine[i].state;
+		struct intel_ringbuffer *ring = ctx->engine[i].ring;
 		uint32_t *reg_state;
 		struct page *page;
 
@@ -2358,7 +2355,7 @@ void intel_lr_context_reset(struct drm_device *dev,
 
 		kunmap_atomic(reg_state);
 
-		ringbuf->head = 0;
-		ringbuf->tail = 0;
+		ring->head = 0;
+		ring->tail = 0;
 	}
 }
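For reviewers skimming the rename, the shape of the per-engine context state after this patch can be sketched as below. This is a standalone illustration, not the real driver definitions: the struct fields mirror the patch, but `I915_NUM_RINGS`'s value, the stub field types, and the two helper functions are simplifying assumptions made for the sketch.

```c
#include <assert.h>

/* Sketch (not the real i915 definitions) of the per-engine context
 * state after this patch: the member formerly named "ringbuf" is now
 * "ring", matching the ring-buffer vs. engine split. */

#define I915_NUM_RINGS 5	/* assumed value for illustration */

struct intel_ringbuffer {
	unsigned int head;
	unsigned int tail;
	unsigned int size;	/* power of two, in bytes */
};

struct drm_i915_gem_object;	/* opaque here */

struct intel_context {
	struct {
		struct drm_i915_gem_object *state;
		struct intel_ringbuffer *ring;	/* was: ringbuf */
		int pin_count;
	} engine[I915_NUM_RINGS];
};

/* Hypothetical helper mirroring the tail handling in
 * execlists_context_unqueue(): advance past the WaIdleLiteRestore
 * padding and wrap with a mask, valid only because the ring size is a
 * power of two. */
static unsigned int advance_tail_past_padding(unsigned int tail,
					      unsigned int size)
{
	return (tail + 8) & (size - 1);
}

/* Hypothetical helper mirroring the loop at the end of
 * intel_lr_context_reset(): zero each allocated ring's head and tail. */
static void context_rings_reset(struct intel_context *ctx)
{
	for (int i = 0; i < I915_NUM_RINGS; i++) {
		struct intel_ringbuffer *ring = ctx->engine[i].ring;

		if (!ring)
			continue;
		ring->head = 0;
		ring->tail = 0;
	}
}
```

Note the masking trick is why the driver can write `req0->tail &= ring->size - 1;` instead of a modulo: for a power-of-two size, `x & (size - 1)` equals `x % size`.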