From patchwork Thu Mar 27 17:59:54 2014
From: oscar.mateo@intel.com
To: intel-gfx@lists.freedesktop.org
Date: Thu, 27 Mar 2014 17:59:54 +0000
Message-Id: <1395943218-7708-26-git-send-email-oscar.mateo@intel.com>
In-Reply-To: <1395943218-7708-1-git-send-email-oscar.mateo@intel.com>
References: <1395943218-7708-1-git-send-email-oscar.mateo@intel.com>
Subject: [Intel-gfx] [PATCH 25/49] drm/i915: Final touches to LR contexts plumbing and refactoring

From: Oscar Mateo <oscar.mateo@intel.com>

Thanks to the previous functions and intel_ringbuffer_get(), every function
that needs to be context-aware now gets the ringbuffer from the appropriate
place (be it the context or the engine itself). The others (either pre-GEN8,
or ones that clearly manipulate the ring's default ringbuffer) get it directly
from the engine.
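After this refactoring, the emit path follows a single pattern: resolve the
ringbuffer once via intel_ringbuffer_begin() (which picks the per-context
ringbuffer when LR contexts are enabled, or the engine's default one
otherwise), then emit and advance on that ringbuffer. A minimal sketch of that
calling pattern, using the helpers added in this series (example_emit_noops is
a hypothetical caller for illustration, not part of the patch):

	/*
	 * Illustrative only: the calling convention the refactored
	 * emitters follow after this change.
	 */
	static int example_emit_noops(struct intel_engine *ring,
				      struct i915_hw_context *ctx)
	{
		struct intel_ringbuffer *ringbuf;
		int i;

		/* Reserves space and resolves the right ringbuffer
		 * (per-context for LRC, otherwise the engine's default). */
		ringbuf = intel_ringbuffer_begin(ring, ctx, 4);
		if (IS_ERR_OR_NULL(ringbuf))
			return PTR_ERR(ringbuf);

		/* All emissions now target the resolved ringbuffer. */
		for (i = 0; i < 4; i++)
			intel_ringbuffer_emit(ringbuf, MI_NOOP);

		intel_ringbuffer_advance(ringbuf);
		return 0;
	}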
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
---
 drivers/gpu/drm/i915/i915_dma.c            |   2 +-
 drivers/gpu/drm/i915/i915_gem.c            |   7 +-
 drivers/gpu/drm/i915/i915_gem_execbuffer.c |  20 +++---
 drivers/gpu/drm/i915/i915_gpu_error.c      |   6 +-
 drivers/gpu/drm/i915/i915_irq.c            |   2 +-
 drivers/gpu/drm/i915/intel_ringbuffer.c    | 109 ++++++++++++++++-------------
 drivers/gpu/drm/i915/intel_ringbuffer.h    |   8 +--
 7 files changed, 82 insertions(+), 72 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c
index 29583da..ea5d965 100644
--- a/drivers/gpu/drm/i915/i915_dma.c
+++ b/drivers/gpu/drm/i915/i915_dma.c
@@ -241,7 +241,7 @@ static int i915_dma_resume(struct drm_device * dev)
 
 	DRM_DEBUG_DRIVER("%s\n", __func__);
 
-	if (__get_ringbuf(ring)->virtual_start == NULL) {
+	if (ring->default_ringbuf.virtual_start == NULL) {
 		DRM_ERROR("can not ioremap virtual address for"
 			  " ring buffer\n");
 		return -ENOMEM;
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index a052a80..e3c3c58 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2466,6 +2466,7 @@ i915_gem_retire_requests_ring(struct intel_engine *ring)
 
 	while (!list_empty(&ring->request_list)) {
 		struct drm_i915_gem_request *request;
+		struct intel_ringbuffer *ringbuf;
 
 		request = list_first_entry(&ring->request_list,
 					   struct drm_i915_gem_request,
@@ -2475,12 +2476,16 @@ i915_gem_retire_requests_ring(struct intel_engine *ring)
 			break;
 
 		trace_i915_gem_request_retire(ring, request->seqno);
+
+		/* TODO: request->ctx is not correctly updated for LR contexts */
+		ringbuf = intel_ringbuffer_get(ring, request->ctx);
+
 		/* We know the GPU must have read the request to have
 		 * sent us the seqno + interrupt, so use the position
 		 * of tail of the request to update the last known position
 		 * of the GPU head.
 		 */
-		__get_ringbuf(ring)->last_retired_head = request->tail;
+		ringbuf->last_retired_head = request->tail;
 
 		i915_gem_free_request(request);
 	}
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index c0a1032..fa5a439 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -990,15 +990,15 @@ i915_reset_gen7_sol_offsets(struct drm_device *dev,
 
 	ringbuf = intel_ringbuffer_begin(ring, ctx, 4 * 3);
 	if (IS_ERR_OR_NULL(ringbuf))
-		return PTR_ERR(ringbuf);
+		return (PTR_ERR(ringbuf));
 
 	for (i = 0; i < 4; i++) {
-		intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
-		intel_ring_emit(ring, GEN7_SO_WRITE_OFFSET(i));
-		intel_ring_emit(ring, 0);
+		intel_ringbuffer_emit(ringbuf, MI_LOAD_REGISTER_IMM(1));
+		intel_ringbuffer_emit(ringbuf, GEN7_SO_WRITE_OFFSET(i));
+		intel_ringbuffer_emit(ringbuf, 0);
 	}
 
-	intel_ring_advance(ring);
+	intel_ringbuffer_advance(ringbuf);
 
 	return 0;
 }
@@ -1239,11 +1239,11 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
 			goto err;
 		}
 
-		intel_ring_emit(ring, MI_NOOP);
-		intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
-		intel_ring_emit(ring, INSTPM);
-		intel_ring_emit(ring, mask << 16 | mode);
-		intel_ring_advance(ring);
+		intel_ringbuffer_emit(ringbuf, MI_NOOP);
+		intel_ringbuffer_emit(ringbuf, MI_LOAD_REGISTER_IMM(1));
+		intel_ringbuffer_emit(ringbuf, INSTPM);
+		intel_ringbuffer_emit(ringbuf, mask << 16 | mode);
+		intel_ringbuffer_advance(ringbuf);
 
 		dev_priv->relative_constants_mode = mode;
 	}
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 67a1fc7..0238efe 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -828,8 +828,8 @@ static void i915_record_ring_state(struct drm_device *dev,
 		ering->hws = I915_READ(mmio);
 	}
 
-	ering->cpu_ring_head = __get_ringbuf(ring)->head;
-	ering->cpu_ring_tail = __get_ringbuf(ring)->tail;
+	ering->cpu_ring_head = ring->default_ringbuf.head;
+	ering->cpu_ring_tail = ring->default_ringbuf.tail;
 
 	ering->hangcheck_score = ring->hangcheck.score;
 	ering->hangcheck_action = ring->hangcheck.action;
@@ -936,7 +936,7 @@ static void i915_gem_record_rings(struct drm_device *dev,
 		}
 
 		error->ring[i].ringbuffer =
-			i915_error_ggtt_object_create(dev_priv, __get_ringbuf(ring)->obj);
+			i915_error_ggtt_object_create(dev_priv, ring->default_ringbuf.obj);
 
 		if (ring->status_page.obj)
 			error->ring[i].hws_page =
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 340cf34..1ba8bb3 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -2593,7 +2593,7 @@ static struct intel_engine *
 semaphore_waits_for(struct intel_engine *ring, u32 *seqno)
 {
 	struct drm_i915_private *dev_priv = ring->dev->dev_private;
-	struct intel_ringbuffer *ringbuf = __get_ringbuf(ring);
+	struct intel_ringbuffer *ringbuf = &ring->default_ringbuf;
 	u32 cmd, ipehr, head;
 	int i;
 
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 54aba64..fba9b05 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -33,10 +33,8 @@
 #include "i915_trace.h"
 #include "intel_drv.h"
 
-static inline int ring_space(struct intel_engine *ring)
+static inline int ring_space(struct intel_ringbuffer *ringbuf)
 {
-	struct intel_ringbuffer *ringbuf = __get_ringbuf(ring);
-
 	int space = (ringbuf->head & HEAD_ADDR) - (ringbuf->tail + I915_RING_FREE_SPACE);
 	if (space < 0)
 		space += ringbuf->size;
@@ -47,7 +45,7 @@ void intel_ringbuffer_advance_and_submit(struct intel_engine *ring,
 					 struct i915_hw_context *ctx)
 {
 	struct drm_i915_private *dev_priv = ring->dev->dev_private;
-	struct intel_ringbuffer *ringbuf = __get_ringbuf(ring);
+	struct intel_ringbuffer *ringbuf = intel_ringbuffer_get(ring, ctx);
 
 	ringbuf->tail &= ringbuf->size - 1;
 	if (dev_priv->gpu_error.stop_rings & intel_ring_flag(ring))
@@ -401,13 +399,13 @@ gen8_render_ring_flush(struct intel_engine *ring,
 	if (IS_ERR_OR_NULL(ringbuf))
 		return (PTR_ERR(ringbuf));
 
-	intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(6));
-	intel_ring_emit(ring, flags);
-	intel_ring_emit(ring, scratch_addr);
-	intel_ring_emit(ring, 0);
-	intel_ring_emit(ring, 0);
-	intel_ring_emit(ring, 0);
-	intel_ring_advance(ring);
+	intel_ringbuffer_emit(ringbuf, GFX_OP_PIPE_CONTROL(6));
+	intel_ringbuffer_emit(ringbuf, flags);
+	intel_ringbuffer_emit(ringbuf, scratch_addr);
+	intel_ringbuffer_emit(ringbuf, 0);
+	intel_ringbuffer_emit(ringbuf, 0);
+	intel_ringbuffer_emit(ringbuf, 0);
+	intel_ringbuffer_advance(ringbuf);
 
 	return 0;
 
@@ -451,7 +449,7 @@ static int init_ring_common(struct intel_engine *ring)
 {
 	struct drm_device *dev = ring->dev;
 	drm_i915_private_t *dev_priv = dev->dev_private;
-	struct intel_ringbuffer *ringbuf = __get_ringbuf(ring);
+	struct intel_ringbuffer *ringbuf = &ring->default_ringbuf;
 	struct drm_i915_gem_object *obj = ringbuf->obj;
 	int ret = 0;
 	u32 head;
@@ -524,7 +522,7 @@ static int init_ring_common(struct intel_engine *ring)
 	else {
 		ringbuf->head = I915_READ_HEAD(ring);
 		ringbuf->tail = I915_READ_TAIL(ring) & TAIL_ADDR;
-		ringbuf->space = ring_space(ring);
+		ringbuf->space = ring_space(ringbuf);
 		ringbuf->last_retired_head = -1;
 	}
 
@@ -538,7 +536,7 @@ out:
 
 static int init_ring_common_lrc(struct intel_engine *ring)
 {
-	struct intel_ringbuffer *ringbuf = __get_ringbuf(ring);
+	struct intel_ringbuffer *ringbuf = &ring->default_ringbuf;
 
 	ringbuf->head = 0;
 	ringbuf->tail = 0;
@@ -741,10 +739,10 @@ gen8_add_request(struct intel_engine *ring,
 	if (IS_ERR_OR_NULL(ringbuf))
 		return (PTR_ERR(ringbuf));
 
-	intel_ring_emit(ring, MI_STORE_DWORD_INDEX);
-	intel_ring_emit(ring, I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
-	intel_ring_emit(ring, ring->outstanding_lazy_seqno);
-	intel_ring_emit(ring, MI_USER_INTERRUPT);
+	intel_ringbuffer_emit(ringbuf, MI_STORE_DWORD_INDEX);
+	intel_ringbuffer_emit(ringbuf, I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
+	intel_ringbuffer_emit(ringbuf, ring->outstanding_lazy_seqno);
+	intel_ringbuffer_emit(ringbuf, MI_USER_INTERRUPT);
 	intel_ringbuffer_advance_and_submit(ring, ctx);
 
 	return 0;
@@ -1402,7 +1400,7 @@ static int init_phys_status_page(struct intel_engine *ring)
 static void destroy_ring_buffer(struct intel_engine *ring)
 {
 	struct drm_i915_private *dev_priv = ring->dev->dev_private;
-	struct intel_ringbuffer *ringbuf = __get_ringbuf(ring);
+	struct intel_ringbuffer *ringbuf = &ring->default_ringbuf;
 
 	if (dev_priv->lrc_enabled)
 		return;
@@ -1417,7 +1415,7 @@ static int alloc_ring_buffer(struct intel_engine *ring)
 	struct drm_device *dev = ring->dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct drm_i915_gem_object *obj = NULL;
-	struct intel_ringbuffer *ringbuf = __get_ringbuf(ring);
+	struct intel_ringbuffer *ringbuf = &ring->default_ringbuf;
 	int ret;
 
 	if (dev_priv->lrc_enabled)
@@ -1454,7 +1452,7 @@ static int intel_init_ring(struct drm_device *dev,
 {
 	struct drm_i915_gem_object *obj;
 	struct drm_i915_private *dev_priv = dev->dev_private;
-	struct intel_ringbuffer *ringbuf = __get_ringbuf(ring);
+	struct intel_ringbuffer *ringbuf = &ring->default_ringbuf;
 	int ret;
 
 	ring->dev = dev;
@@ -1526,7 +1524,7 @@ err_hws:
 void intel_cleanup_ring(struct intel_engine *ring)
 {
 	struct drm_i915_private *dev_priv;
-	struct intel_ringbuffer *ringbuf = __get_ringbuf(ring);
+	struct intel_ringbuffer *ringbuf = &ring->default_ringbuf;
 	int ret;
 
 	if (ringbuf->obj == NULL)
@@ -1553,10 +1551,11 @@ void intel_cleanup_ring(struct intel_engine *ring)
 	cleanup_status_page(ring);
 }
 
-static int intel_ring_wait_request(struct intel_engine *ring, int n)
+static int intel_ring_wait_request(struct intel_engine *ring,
+				   struct i915_hw_context *ctx, int n)
 {
 	struct drm_i915_gem_request *request;
-	struct intel_ringbuffer *ringbuf = __get_ringbuf(ring);
+	struct intel_ringbuffer *ringbuf = intel_ringbuffer_get(ring, ctx);
 	u32 seqno = 0, tail;
 	int ret;
 
@@ -1564,7 +1563,7 @@ static int intel_ring_wait_request(struct intel_engine *ring, int n)
 		ringbuf->head = ringbuf->last_retired_head;
 		ringbuf->last_retired_head = -1;
 
-		ringbuf->space = ring_space(ring);
+		ringbuf->space = ring_space(ringbuf);
 		if (ringbuf->space >= n)
 			return 0;
 	}
@@ -1600,7 +1599,7 @@ static int intel_ring_wait_request(struct intel_engine *ring, int n)
 		return ret;
 
 	ringbuf->head = tail;
-	ringbuf->space = ring_space(ring);
+	ringbuf->space = ring_space(ringbuf);
 	if (WARN_ON(ringbuf->space < n))
 		return -ENOSPC;
 
@@ -1612,11 +1611,11 @@ static int ring_wait_for_space(struct intel_engine *ring,
 {
 	struct drm_device *dev = ring->dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
-	struct intel_ringbuffer *ringbuf = __get_ringbuf(ring);
+	struct intel_ringbuffer *ringbuf = intel_ringbuffer_get(ring, ctx);
 	unsigned long end;
 	int ret;
 
-	ret = intel_ring_wait_request(ring, n);
+	ret = intel_ring_wait_request(ring, ctx, n);
 	if (ret != -ENOSPC)
 		return ret;
 
@@ -1633,7 +1632,7 @@ static int ring_wait_for_space(struct intel_engine *ring,
 
 	do {
 		ringbuf->head = I915_READ_HEAD(ring);
-		ringbuf->space = ring_space(ring);
+		ringbuf->space = ring_space(ringbuf);
 		if (ringbuf->space >= n) {
 			trace_i915_ring_wait_end(ring);
 			return 0;
@@ -1661,7 +1660,7 @@ static int intel_wrap_ring_buffer(struct intel_engine *ring,
 				  struct i915_hw_context *ctx)
 {
 	uint32_t __iomem *virt;
-	struct intel_ringbuffer *ringbuf = __get_ringbuf(ring);
+	struct intel_ringbuffer *ringbuf = intel_ringbuffer_get(ring, ctx);
 	int rem = ringbuf->size - ringbuf->tail;
 
 	if (ringbuf->space < rem) {
@@ -1676,7 +1675,7 @@ static int intel_wrap_ring_buffer(struct intel_engine *ring,
 		iowrite32(MI_NOOP, virt++);
 
 	ringbuf->tail = 0;
-	ringbuf->space = ring_space(ring);
+	ringbuf->space = ring_space(ringbuf);
 
 	return 0;
 }
@@ -1726,7 +1725,7 @@ intel_ring_alloc_seqno(struct intel_engine *ring)
 static int __intel_ring_prepare(struct intel_engine *ring,
 				struct i915_hw_context *ctx, int bytes)
 {
-	struct intel_ringbuffer *ringbuf = __get_ringbuf(ring);
+	struct intel_ringbuffer *ringbuf = intel_ringbuffer_get(ring, ctx);
 	int ret;
 
 	if (unlikely(ringbuf->tail + bytes > ringbuf->effective_size)) {
@@ -1745,6 +1744,17 @@ static int __intel_ring_prepare(struct intel_engine *ring,
 }
 
 struct intel_ringbuffer *
+intel_ringbuffer_get(struct intel_engine *ring, struct i915_hw_context *ctx)
+{
+	struct drm_i915_private *dev_priv = ring->dev->dev_private;
+
+	if (dev_priv->lrc_enabled && ctx)
+		return ctx->ringbuf;
+	else
+		return &ring->default_ringbuf;
+}
+
+struct intel_ringbuffer *
 intel_ringbuffer_begin(struct intel_engine *ring,
 		       struct i915_hw_context *ctx,
 		       int num_dwords)
@@ -1776,20 +1786,21 @@ intel_ringbuffer_begin(struct intel_engine *ring,
 int intel_ringbuffer_cacheline_align(struct intel_engine *ring,
 				     struct i915_hw_context *ctx)
 {
-	int num_dwords = (64 - (__get_ringbuf(ring)->tail & 63)) / sizeof(uint32_t);
-	int ret;
+	struct intel_ringbuffer *ringbuf = intel_ringbuffer_get(ring, ctx);
+	int num_dwords;
 
+	num_dwords = (64 - (ringbuf->tail & 63)) / sizeof(uint32_t);
 	if (num_dwords == 0)
 		return 0;
 
-	ret = intel_ring_begin(ring, num_dwords);
-	if (ret)
-		return ret;
+	ringbuf = intel_ringbuffer_begin(ring, ctx, num_dwords);
+	if (IS_ERR_OR_NULL(ringbuf))
+		return PTR_ERR(ringbuf);
 
 	while (num_dwords--)
 		intel_ring_emit(ring, MI_NOOP);
 
-	intel_ring_advance(ring);
+	intel_ringbuffer_advance(ringbuf);
 
 	return 0;
 }
@@ -1860,11 +1871,11 @@ static int gen8_ring_flush(struct intel_engine *ring,
 	if (invalidate & I915_GEM_GPU_DOMAINS)
 		cmd |= MI_INVALIDATE_TLB | MI_FLUSH_DW_STORE_INDEX |
 			MI_FLUSH_DW_OP_STOREDW;
-	intel_ring_emit(ring, cmd);
-	intel_ring_emit(ring, I915_GEM_HWS_SCRATCH_ADDR | MI_FLUSH_DW_USE_GTT);
-	intel_ring_emit(ring, 0); /* upper addr */
-	intel_ring_emit(ring, 0); /* value */
-	intel_ring_advance(ring);
+	intel_ringbuffer_emit(ringbuf, cmd);
+	intel_ringbuffer_emit(ringbuf, I915_GEM_HWS_SCRATCH_ADDR | MI_FLUSH_DW_USE_GTT);
+	intel_ringbuffer_emit(ringbuf, 0); /* upper addr */
+	intel_ringbuffer_emit(ringbuf, 0); /* value */
+	intel_ringbuffer_advance(ringbuf);
 
 	return 0;
 }
@@ -1916,11 +1927,11 @@ gen8_ring_dispatch_execbuffer(struct intel_engine *ring,
 		return (PTR_ERR(ringbuf));
 
 	/* FIXME(BDW): Address space and security selectors. */
-	intel_ring_emit(ring, MI_BATCH_BUFFER_START_GEN8 | (ppgtt<<8));
-	intel_ring_emit(ring, offset);
-	intel_ring_emit(ring, 0);
-	intel_ring_emit(ring, MI_NOOP);
-	intel_ring_advance(ring);
+	intel_ringbuffer_emit(ringbuf, MI_BATCH_BUFFER_START_GEN8 | (ppgtt<<8));
+	intel_ringbuffer_emit(ringbuf, offset);
+	intel_ringbuffer_emit(ringbuf, 0);
+	intel_ringbuffer_emit(ringbuf, MI_NOOP);
+	intel_ringbuffer_advance(ringbuf);
 
 	return 0;
 }
@@ -2112,7 +2123,7 @@ int intel_render_ring_init_dri(struct drm_device *dev, u64 start, u32 size)
 {
 	drm_i915_private_t *dev_priv = dev->dev_private;
 	struct intel_engine *ring = &dev_priv->ring[RCS];
-	struct intel_ringbuffer *ringbuf = __get_ringbuf(ring);
+	struct intel_ringbuffer *ringbuf = &ring->default_ringbuf;
 	int ret;
 
 	if (INTEL_INFO(dev)->gen >= 6) {
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 101d4d4..3b0f28b 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -210,16 +210,10 @@ struct intel_engine {
 	u32 (*get_cmd_length_mask)(u32 cmd_header);
 };
 
-/* This is a temporary define to help us transition to per-context ringbuffers */
-static inline struct intel_ringbuffer *__get_ringbuf(struct intel_engine *ring)
-{
-	return &ring->default_ringbuf;
-}
-
 static inline bool intel_ring_initialized(struct intel_engine *ring)
 {
-	return __get_ringbuf(ring)->obj != NULL;
+	return ring->default_ringbuf.obj != NULL;
 }
 
 static inline unsigned