From patchwork Tue Apr 26 20:06:07 2016
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 8944261
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org
Date: Tue, 26 Apr 2016 21:06:07 +0100
Message-Id: <1461701180-895-13-git-send-email-chris@chris-wilson.co.uk>
X-Mailer: git-send-email 2.8.1
In-Reply-To: <1461701180-895-1-git-send-email-chris@chris-wilson.co.uk>
References: <1461701180-895-1-git-send-email-chris@chris-wilson.co.uk>
Subject: [Intel-gfx] [PATCH v6 12/25] drm/i915: Unify intel_ring_begin()
List-Id: Intel graphics driver community testing & development

Combine the near identical implementations of intel_logical_ring_begin()
and intel_ring_begin() - the only difference is that the logical wait
has to check for a matching ring (which is assumed by legacy).

Signed-off-by: Chris Wilson
Cc: Joonas Lahtinen
Cc: Tvrtko Ursulin
Reviewed-by: Joonas Lahtinen
---
 drivers/gpu/drm/i915/intel_lrc.c        | 146 ++-----------------------
 drivers/gpu/drm/i915/intel_lrc.h        |   1 -
 drivers/gpu/drm/i915/intel_mocs.c       |  12 +--
 drivers/gpu/drm/i915/intel_ringbuffer.c | 182 ++++++++++++--------------------
 drivers/gpu/drm/i915/intel_ringbuffer.h |   3 -
 5 files changed, 81 insertions(+), 263 deletions(-)
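A note on the space accounting the patch below unifies: both the legacy
and execlists paths now share one decision between waiting and wrapping,
expressed through wait_bytes/need_wrap. What follows is a minimal
standalone sketch of that decision tree, plain userspace C for
illustration only; struct ring and compute_wait_bytes() are hypothetical
stand-ins for the relevant intel_ringbuffer fields, not kernel types.

	#include <stdbool.h>
	#include <stdio.h>

	struct ring {
		int size;		/* full ring size in bytes */
		int effective_size;	/* size minus the wrap cushion */
		int tail;		/* current write offset */
		int reserved_size;	/* kept back for the request postamble */
	};

	/* Mirrors the wait_bytes/need_wrap selection in intel_ring_begin(). */
	static int compute_wait_bytes(const struct ring *r, int bytes,
				      bool *need_wrap)
	{
		int remain_actual = r->size - r->tail;
		int remain_usable = r->effective_size - r->tail;
		int total_bytes = bytes + r->reserved_size;

		*need_wrap = false;
		if (bytes > remain_usable) {
			/* The request itself does not fit before the wrap
			 * point: flush the remainder, then wait for
			 * base + reserved. */
			*need_wrap = true;
			return remain_actual + total_bytes;
		} else if (total_bytes > remain_usable) {
			/* The request fits, but the reserved space falls off
			 * the end: wait for the reserved size from the start
			 * of the ring. */
			return remain_actual + r->reserved_size;
		}
		/* No wrap required, just wait until total_bytes are free. */
		return total_bytes;
	}

	int main(void)
	{
		struct ring r = { .size = 4096, .effective_size = 4032,
				  .tail = 4000, .reserved_size = 160 };
		bool wrap;

		/* 128 bytes requested > 32 usable bytes left, so we must wrap:
		 * wait covers the 96-byte tail remainder plus the 288-byte
		 * total request, i.e. 384 bytes. */
		printf("wait=%d wrap=%d\n",
		       compute_wait_bytes(&r, 128, &wrap), wrap);
		return 0;
	}
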
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 2b7e6bbc017b..ba87c94928e7 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -721,48 +721,6 @@ int intel_logical_ring_alloc_request_extras(struct drm_i915_gem_request *request
 	return ret;
 }
 
-static int logical_ring_wait_for_space(struct drm_i915_gem_request *req,
-				       int bytes)
-{
-	struct intel_ringbuffer *ringbuf = req->ringbuf;
-	struct intel_engine_cs *engine = req->engine;
-	struct drm_i915_gem_request *target;
-	unsigned space;
-	int ret;
-
-	if (intel_ring_space(ringbuf) >= bytes)
-		return 0;
-
-	/* The whole point of reserving space is to not wait! */
-	WARN_ON(ringbuf->reserved_in_use);
-
-	list_for_each_entry(target, &engine->request_list, list) {
-		/*
-		 * The request queue is per-engine, so can contain requests
-		 * from multiple ringbuffers. Here, we must ignore any that
-		 * aren't from the ringbuffer we're considering.
-		 */
-		if (target->ringbuf != ringbuf)
-			continue;
-
-		/* Would completion of this request free enough space? */
-		space = __intel_ring_space(target->postfix, ringbuf->tail,
-					   ringbuf->size);
-		if (space >= bytes)
-			break;
-	}
-
-	if (WARN_ON(&target->list == &engine->request_list))
-		return -ENOSPC;
-
-	ret = i915_wait_request(target);
-	if (ret)
-		return ret;
-
-	ringbuf->space = space;
-	return 0;
-}
-
 /*
  * intel_logical_ring_advance_and_submit() - advance the tail and submit the workload
  * @request: Request to advance the logical ringbuffer of.
@@ -814,92 +772,6 @@ intel_logical_ring_advance_and_submit(struct drm_i915_gem_request *request)
 	return 0;
 }
 
-static void __wrap_ring_buffer(struct intel_ringbuffer *ringbuf)
-{
-	uint32_t __iomem *virt;
-	int rem = ringbuf->size - ringbuf->tail;
-
-	virt = ringbuf->virtual_start + ringbuf->tail;
-	rem /= 4;
-	while (rem--)
-		iowrite32(MI_NOOP, virt++);
-
-	ringbuf->tail = 0;
-	intel_ring_update_space(ringbuf);
-}
-
-static int logical_ring_prepare(struct drm_i915_gem_request *req, int bytes)
-{
-	struct intel_ringbuffer *ringbuf = req->ringbuf;
-	int remain_usable = ringbuf->effective_size - ringbuf->tail;
-	int remain_actual = ringbuf->size - ringbuf->tail;
-	int ret, total_bytes, wait_bytes = 0;
-	bool need_wrap = false;
-
-	if (ringbuf->reserved_in_use)
-		total_bytes = bytes;
-	else
-		total_bytes = bytes + ringbuf->reserved_size;
-
-	if (unlikely(bytes > remain_usable)) {
-		/*
-		 * Not enough space for the basic request. So need to flush
-		 * out the remainder and then wait for base + reserved.
-		 */
-		wait_bytes = remain_actual + total_bytes;
-		need_wrap = true;
-	} else {
-		if (unlikely(total_bytes > remain_usable)) {
-			/*
-			 * The base request will fit but the reserved space
-			 * falls off the end. So don't need an immediate wrap
-			 * and only need to effectively wait for the reserved
-			 * size space from the start of ringbuffer.
-			 */
-			wait_bytes = remain_actual + ringbuf->reserved_size;
-		} else if (total_bytes > ringbuf->space) {
-			/* No wrapping required, just waiting. */
-			wait_bytes = total_bytes;
-		}
-	}
-
-	if (wait_bytes) {
-		ret = logical_ring_wait_for_space(req, wait_bytes);
-		if (unlikely(ret))
-			return ret;
-
-		if (need_wrap)
-			__wrap_ring_buffer(ringbuf);
-	}
-
-	return 0;
-}
-
-/**
- * intel_logical_ring_begin() - prepare the logical ringbuffer to accept some commands
- *
- * @req: The request to start some new work for
- * @num_dwords: number of DWORDs that we plan to write to the ringbuffer.
- *
- * The ringbuffer might not be ready to accept the commands right away (maybe it needs to
- * be wrapped, or wait a bit for the tail to be updated). This function takes care of that
- * and also preallocates a request (every workload submission is still mediated through
- * requests, same as it did with legacy ringbuffer submission).
- *
- * Return: non-zero if the ringbuffer is not ready to be written to.
- */
-int intel_logical_ring_begin(struct drm_i915_gem_request *req, int num_dwords)
-{
-	int ret;
-
-	ret = logical_ring_prepare(req, num_dwords * sizeof(uint32_t));
-	if (ret)
-		return ret;
-
-	req->ringbuf->space -= num_dwords * sizeof(uint32_t);
-	return 0;
-}
-
 int intel_logical_ring_reserve_space(struct drm_i915_gem_request *request)
 {
 	/*
@@ -912,7 +784,7 @@ int intel_logical_ring_reserve_space(struct drm_i915_gem_request *request)
 	 */
 	intel_ring_reserved_space_reserve(request->ringbuf,
 					  MIN_SPACE_FOR_ADD_REQUEST);
-	return intel_logical_ring_begin(request, 0);
+	return intel_ring_begin(request, 0);
 }
 
 /**
@@ -982,7 +854,7 @@ int intel_execlists_submission(struct i915_execbuffer_params *params,
 
 	if (engine == &dev_priv->engine[RCS] &&
 	    instp_mode != dev_priv->relative_constants_mode) {
-		ret = intel_logical_ring_begin(params->request, 4);
+		ret = intel_ring_begin(params->request, 4);
 		if (ret)
 			return ret;
 
@@ -1178,7 +1050,7 @@ static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
 	if (ret)
 		return ret;
 
-	ret = intel_logical_ring_begin(req, w->count * 2 + 2);
+	ret = intel_ring_begin(req, w->count * 2 + 2);
 	if (ret)
 		return ret;
 
@@ -1669,7 +1541,7 @@ static int intel_logical_ring_emit_pdps(struct drm_i915_gem_request *req)
 	const int num_lri_cmds = GEN8_LEGACY_PDPES * 2;
 	int i, ret;
 
-	ret = intel_logical_ring_begin(req, num_lri_cmds * 2 + 2);
+	ret = intel_ring_begin(req, num_lri_cmds * 2 + 2);
 	if (ret)
 		return ret;
 
@@ -1716,7 +1588,7 @@ static int gen8_emit_bb_start(struct drm_i915_gem_request *req,
 		req->ctx->ppgtt->pd_dirty_rings &= ~intel_engine_flag(req->engine);
 	}
 
-	ret = intel_logical_ring_begin(req, 4);
+	ret = intel_ring_begin(req, 4);
 	if (ret)
 		return ret;
 
@@ -1778,7 +1650,7 @@ static int gen8_emit_flush(struct drm_i915_gem_request *request,
 	uint32_t cmd;
 	int ret;
 
-	ret = intel_logical_ring_begin(request, 4);
+	ret = intel_ring_begin(request, 4);
 	if (ret)
 		return ret;
 
@@ -1846,7 +1718,7 @@ static int gen8_emit_flush_render(struct drm_i915_gem_request *request,
 		vf_flush_wa = true;
 	}
 
-	ret = intel_logical_ring_begin(request, vf_flush_wa ?
-				       12 : 6);
+	ret = intel_ring_begin(request, vf_flush_wa ? 12 : 6);
 	if (ret)
 		return ret;
 
@@ -1920,7 +1792,7 @@ static int gen8_emit_request(struct drm_i915_gem_request *request)
 	struct intel_ringbuffer *ringbuf = request->ringbuf;
 	int ret;
 
-	ret = intel_logical_ring_begin(request, 6 + WA_TAIL_DWORDS);
+	ret = intel_ring_begin(request, 6 + WA_TAIL_DWORDS);
 	if (ret)
 		return ret;
 
@@ -1944,7 +1816,7 @@ static int gen8_emit_request_render(struct drm_i915_gem_request *request)
 	struct intel_ringbuffer *ringbuf = request->ringbuf;
 	int ret;
 
-	ret = intel_logical_ring_begin(request, 8 + WA_TAIL_DWORDS);
+	ret = intel_ring_begin(request, 8 + WA_TAIL_DWORDS);
 	if (ret)
 		return ret;
 
diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
index 461f1ef9b5c1..60a7385bc531 100644
--- a/drivers/gpu/drm/i915/intel_lrc.h
+++ b/drivers/gpu/drm/i915/intel_lrc.h
@@ -63,7 +63,6 @@ int intel_logical_ring_reserve_space(struct drm_i915_gem_request *request);
 void intel_logical_ring_stop(struct intel_engine_cs *engine);
 void intel_logical_ring_cleanup(struct intel_engine_cs *engine);
 int intel_logical_rings_init(struct drm_device *dev);
-int intel_logical_ring_begin(struct drm_i915_gem_request *req, int num_dwords);
 int logical_ring_flush_all_caches(struct drm_i915_gem_request *req);
 
 /**
diff --git a/drivers/gpu/drm/i915/intel_mocs.c b/drivers/gpu/drm/i915/intel_mocs.c
index 23b8545ad6b0..6ba4bf7f2a89 100644
--- a/drivers/gpu/drm/i915/intel_mocs.c
+++ b/drivers/gpu/drm/i915/intel_mocs.c
@@ -239,11 +239,9 @@ static int emit_mocs_control_table(struct drm_i915_gem_request *req,
 	if (WARN_ON(table->size > GEN9_NUM_MOCS_ENTRIES))
 		return -ENODEV;
 
-	ret = intel_logical_ring_begin(req, 2 + 2 * GEN9_NUM_MOCS_ENTRIES);
-	if (ret) {
-		DRM_DEBUG("intel_logical_ring_begin failed %d\n", ret);
+	ret = intel_ring_begin(req, 2 + 2 * GEN9_NUM_MOCS_ENTRIES);
+	if (ret)
 		return ret;
-	}
 
 	intel_logical_ring_emit(ringbuf,
 				MI_LOAD_REGISTER_IMM(GEN9_NUM_MOCS_ENTRIES));
@@ -305,11 +303,9 @@ static int emit_mocs_l3cc_table(struct drm_i915_gem_request *req,
 	if (WARN_ON(table->size > GEN9_NUM_MOCS_ENTRIES))
 		return -ENODEV;
 
-	ret = intel_logical_ring_begin(req, 2 + GEN9_NUM_MOCS_ENTRIES);
-	if (ret) {
-		DRM_DEBUG("intel_logical_ring_begin failed %d\n", ret);
+	ret = intel_ring_begin(req, 2 + GEN9_NUM_MOCS_ENTRIES);
+	if (ret)
 		return ret;
-	}
 
 	intel_logical_ring_emit(ringbuf,
 				MI_LOAD_REGISTER_IMM(GEN9_NUM_MOCS_ENTRIES / 2));
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 61c120aed11e..1d3d2ea3e9bc 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -53,12 +53,6 @@ void intel_ring_update_space(struct intel_ringbuffer *ringbuf)
 					    ringbuf->tail, ringbuf->size);
 }
 
-int intel_ring_space(struct intel_ringbuffer *ringbuf)
-{
-	intel_ring_update_space(ringbuf);
-	return ringbuf->space;
-}
-
 bool intel_engine_stopped(struct intel_engine_cs *engine)
 {
 	struct drm_i915_private *dev_priv = engine->dev->dev_private;
@@ -2318,51 +2312,6 @@ void intel_cleanup_engine(struct intel_engine_cs *engine)
 	engine->dev = NULL;
 }
 
-static int ring_wait_for_space(struct intel_engine_cs *engine, int n)
-{
-	struct intel_ringbuffer *ringbuf = engine->buffer;
-	struct drm_i915_gem_request *request;
-	unsigned space;
-	int ret;
-
-	if (intel_ring_space(ringbuf) >= n)
-		return 0;
-
-	/* The whole point of reserving space is to not wait! */
-	WARN_ON(ringbuf->reserved_in_use);
-
-	list_for_each_entry(request, &engine->request_list, list) {
-		space = __intel_ring_space(request->postfix, ringbuf->tail,
-					   ringbuf->size);
-		if (space >= n)
-			break;
-	}
-
-	if (WARN_ON(&request->list == &engine->request_list))
-		return -ENOSPC;
-
-	ret = i915_wait_request(request);
-	if (ret)
-		return ret;
-
-	ringbuf->space = space;
-	return 0;
-}
-
-static void __wrap_ring_buffer(struct intel_ringbuffer *ringbuf)
-{
-	uint32_t __iomem *virt;
-	int rem = ringbuf->size - ringbuf->tail;
-
-	virt = ringbuf->virtual_start + ringbuf->tail;
-	rem /= 4;
-	while (rem--)
-		iowrite32(MI_NOOP, virt++);
-
-	ringbuf->tail = 0;
-	intel_ring_update_space(ringbuf);
-}
-
 int intel_engine_idle(struct intel_engine_cs *engine)
 {
 	struct drm_i915_gem_request *req;
@@ -2404,63 +2353,74 @@ int intel_ring_reserve_space(struct drm_i915_gem_request *request)
 
 void intel_ring_reserved_space_reserve(struct intel_ringbuffer *ringbuf, int size)
 {
-	WARN_ON(ringbuf->reserved_size);
-	WARN_ON(ringbuf->reserved_in_use);
-
+	GEM_BUG_ON(ringbuf->reserved_size);
 	ringbuf->reserved_size = size;
 }
 
 void intel_ring_reserved_space_cancel(struct intel_ringbuffer *ringbuf)
 {
-	WARN_ON(ringbuf->reserved_in_use);
-
+	GEM_BUG_ON(!ringbuf->reserved_size);
 	ringbuf->reserved_size = 0;
-	ringbuf->reserved_in_use = false;
 }
 
 void intel_ring_reserved_space_use(struct intel_ringbuffer *ringbuf)
 {
-	WARN_ON(ringbuf->reserved_in_use);
-
-	ringbuf->reserved_in_use = true;
-	ringbuf->reserved_tail = ringbuf->tail;
+	GEM_BUG_ON(!ringbuf->reserved_size);
+	ringbuf->reserved_size = 0;
 }
 
 void intel_ring_reserved_space_end(struct intel_ringbuffer *ringbuf)
 {
-	WARN_ON(!ringbuf->reserved_in_use);
-	if (ringbuf->tail > ringbuf->reserved_tail) {
-		WARN(ringbuf->tail > ringbuf->reserved_tail + ringbuf->reserved_size,
-		     "request reserved size too small: %d vs %d!\n",
-		     ringbuf->tail - ringbuf->reserved_tail, ringbuf->reserved_size);
-	} else {
+	GEM_BUG_ON(ringbuf->reserved_size);
+}
+
+static int wait_for_space(struct drm_i915_gem_request *req, int bytes)
+{
+	struct intel_ringbuffer *ringbuf = req->ringbuf;
+	struct intel_engine_cs *engine = req->engine;
+	struct drm_i915_gem_request *target;
+
+	intel_ring_update_space(ringbuf);
+	if (ringbuf->space >= bytes)
+		return 0;
+
+	/* The whole point of reserving space is to not wait! */
+	GEM_BUG_ON(!ringbuf->reserved_size);
+
+	list_for_each_entry(target, &engine->request_list, list) {
+		unsigned space;
+
 		/*
-		 * The ring was wrapped while the reserved space was in use.
-		 * That means that some unknown amount of the ring tail was
-		 * no-op filled and skipped. Thus simply adding the ring size
-		 * to the tail and doing the above space check will not work.
-		 * Rather than attempt to track how much tail was skipped,
-		 * it is much simpler to say that also skipping the sanity
-		 * check every once in a while is not a big issue.
+		 * The request queue is per-engine, so can contain requests
+		 * from multiple ringbuffers. Here, we must ignore any that
+		 * aren't from the ringbuffer we're considering.
 		 */
+		if (target->ringbuf != ringbuf)
+			continue;
+
+		/* Would completion of this request free enough space? */
+		space = __intel_ring_space(target->postfix, ringbuf->tail,
+					   ringbuf->size);
+		if (space >= bytes)
+			break;
 	}
 
-	ringbuf->reserved_size = 0;
-	ringbuf->reserved_in_use = false;
+	if (WARN_ON(&target->list == &engine->request_list))
+		return -ENOSPC;
+
+	return i915_wait_request(target);
 }
 
-static int __intel_ring_prepare(struct intel_engine_cs *engine, int bytes)
+int intel_ring_begin(struct drm_i915_gem_request *req, int num_dwords)
 {
-	struct intel_ringbuffer *ringbuf = engine->buffer;
-	int remain_usable = ringbuf->effective_size - ringbuf->tail;
+	struct intel_ringbuffer *ringbuf = req->ringbuf;
 	int remain_actual = ringbuf->size - ringbuf->tail;
-	int ret, total_bytes, wait_bytes = 0;
+	int remain_usable = ringbuf->effective_size - ringbuf->tail;
+	int bytes = num_dwords * sizeof(u32);
+	int total_bytes, wait_bytes;
 	bool need_wrap = false;
 
-	if (ringbuf->reserved_in_use)
-		total_bytes = bytes;
-	else
-		total_bytes = bytes + ringbuf->reserved_size;
+	total_bytes = bytes + ringbuf->reserved_size;
 
 	if (unlikely(bytes > remain_usable)) {
 		/*
@@ -2469,44 +2429,38 @@ static int __intel_ring_prepare(struct intel_engine_cs *engine, int bytes)
 		 */
 		wait_bytes = remain_actual + total_bytes;
 		need_wrap = true;
-	} else {
-		if (unlikely(total_bytes > remain_usable)) {
-			/*
-			 * The base request will fit but the reserved space
-			 * falls off the end. So don't need an immediate wrap
-			 * and only need to effectively wait for the reserved
-			 * size space from the start of ringbuffer.
-			 */
-			wait_bytes = remain_actual + ringbuf->reserved_size;
-		} else if (total_bytes > ringbuf->space) {
-			/* No wrapping required, just waiting. */
-			wait_bytes = total_bytes;
-		}
-	}
+	} else if (unlikely(total_bytes > remain_usable)) {
+		/*
+		 * The base request will fit but the reserved space
+		 * falls off the end. So we don't need an immediate wrap
+		 * and only need to effectively wait for the reserved
+		 * size space from the start of ringbuffer.
+		 */
+		wait_bytes = remain_actual + ringbuf->reserved_size;
+	} else
+		/* No wrapping required, just waiting. */
+		wait_bytes = total_bytes;
 
-	if (wait_bytes) {
-		ret = ring_wait_for_space(engine, wait_bytes);
+	if (wait_bytes > ringbuf->space) {
+		int ret = wait_for_space(req, wait_bytes);
 		if (unlikely(ret))
 			return ret;
 
-		if (need_wrap)
-			__wrap_ring_buffer(ringbuf);
+		intel_ring_update_space(ringbuf);
 	}
 
-	return 0;
-}
+	if (unlikely(need_wrap)) {
+		GEM_BUG_ON(remain_actual > ringbuf->space);
+		GEM_BUG_ON(ringbuf->tail + remain_actual > ringbuf->size);
 
-int intel_ring_begin(struct drm_i915_gem_request *req,
-		     int num_dwords)
-{
-	struct intel_engine_cs *engine = req->engine;
-	int ret;
-
-	ret = __intel_ring_prepare(engine, num_dwords * sizeof(uint32_t));
-	if (ret)
-		return ret;
+		memset(ringbuf->virtual_start + ringbuf->tail,
+		       0, remain_actual);
+		ringbuf->tail = 0;
+		ringbuf->space -= remain_actual;
+	}
 
-	engine->buffer->space -= num_dwords * sizeof(uint32_t);
+	ringbuf->space -= bytes;
+	GEM_BUG_ON(ringbuf->space < 0);
 	return 0;
 }
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 2ade194bbea9..201e7752d765 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -108,8 +108,6 @@ struct intel_ringbuffer {
 	int size;
 	int effective_size;
 	int reserved_size;
-	int reserved_tail;
-	bool reserved_in_use;
 
 	/** We track the position of the requests in the ring buffer, and
 	 * when each is retired we increment last_retired_head as the GPU
@@ -459,7 +457,6 @@ static inline void intel_ring_advance(struct intel_engine_cs *engine)
 }
 int __intel_ring_space(int head, int tail, int size);
 void intel_ring_update_space(struct intel_ringbuffer *ringbuf);
-int intel_ring_space(struct intel_ringbuffer *ringbuf);
 bool intel_engine_stopped(struct intel_engine_cs *engine);
 
 int __must_check intel_engine_idle(struct intel_engine_cs *engine);
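
For reference, the unified intel_ring_begin() sizes its wait against the
free space reported by __intel_ring_space() (declared in
intel_ringbuffer.h above). Below is a standalone sketch of that
wrap-around arithmetic, assuming this era of the driver keeps back the
64-byte I915_RING_FREE_SPACE cushion; plain userspace C for
illustration, not the kernel implementation itself.

	#include <assert.h>

	/* Free bytes between tail and head, modulo the ring size, minus a
	 * cushion so the tail can never fully catch up to the head. */
	static int ring_space(int head, int tail, int size)
	{
		int space = head - tail;

		if (space <= 0)
			space += size;	/* free region wraps past the end */
		return space - 64;	/* assumed I915_RING_FREE_SPACE */
	}

	int main(void)
	{
		/* head behind tail: free space wraps around the ring end,
		 * 4096 - (4000 - 256) - 64 = 288 bytes */
		assert(ring_space(256, 4000, 4096) == 288);
		/* head ahead of tail: plain difference, 1024 - 64 = 960 */
		assert(ring_space(2048, 1024, 4096) == 960);
		return 0;
	}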