From patchwork Thu Mar 27 17:59:40 2014
From: oscar.mateo@intel.com
To: intel-gfx@lists.freedesktop.org
Date: Thu, 27 Mar 2014 17:59:40 +0000
Message-Id: <1395943218-7708-12-git-send-email-oscar.mateo@intel.com>
In-Reply-To: <1395943218-7708-1-git-send-email-oscar.mateo@intel.com>
References: <1395943218-7708-1-git-send-email-oscar.mateo@intel.com>
Subject: [Intel-gfx] [PATCH 11/49] drm/i915: Split the ringbuffers and the rings

From: Oscar Mateo

Following the logic of the previous patch, the ringbuffers and the rings
belong in different structs. We keep the relationship between the two via
the default_ringbuf living inside each ring/engine.

This commit should not introduce functional changes (unless I made an
error, that is).

Signed-off-by: Oscar Mateo
---
 drivers/gpu/drm/i915/i915_dma.c         |  25 +++---
 drivers/gpu/drm/i915/i915_gem.c         |   2 +-
 drivers/gpu/drm/i915/i915_gpu_error.c   |   6 +-
 drivers/gpu/drm/i915/i915_irq.c         |   9 ++-
 drivers/gpu/drm/i915/intel_ringbuffer.c | 136 ++++++++++++++++++--------------
 drivers/gpu/drm/i915/intel_ringbuffer.h |  61 ++++++++------
 6 files changed, 136 insertions(+), 103 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c
index 43c5df0..288e1c9 100644
--- a/drivers/gpu/drm/i915/i915_dma.c
+++ b/drivers/gpu/drm/i915/i915_dma.c
@@ -47,6 +47,8 @@
 
 #define LP_RING(d) (&((struct drm_i915_private *)(d))->ring[RCS])
 
+#define LP_RINGBUF(d) (&((struct drm_i915_private *)(d))->ring[RCS].default_ringbuf)
+
 #define BEGIN_LP_RING(n) \
 	intel_ring_begin(LP_RING(dev_priv), (n))
 
@@ -63,7 +65,7 @@
  * has access to the ring.
  */
 #define RING_LOCK_TEST_WITH_RETURN(dev, file) do {			\
-	if (LP_RING(dev->dev_private)->obj == NULL)			\
+	if (LP_RINGBUF(dev->dev_private)->obj == NULL)			\
 		LOCK_TEST_WITH_RETURN(dev, file);			\
 } while (0)
 
@@ -140,6 +142,7 @@ void i915_kernel_lost_context(struct drm_device * dev)
 	drm_i915_private_t *dev_priv = dev->dev_private;
 	struct drm_i915_master_private *master_priv;
 	struct intel_engine *ring = LP_RING(dev_priv);
+	struct intel_ringbuffer *ringbuf = LP_RINGBUF(dev_priv);
 
 	/*
 	 * We should never lose context on the ring with modesetting
@@ -148,17 +151,17 @@ void i915_kernel_lost_context(struct drm_device * dev)
 	if (drm_core_check_feature(dev, DRIVER_MODESET))
 		return;
 
-	ring->head = I915_READ_HEAD(ring) & HEAD_ADDR;
-	ring->tail = I915_READ_TAIL(ring) & TAIL_ADDR;
-	ring->space = ring->head - (ring->tail + I915_RING_FREE_SPACE);
-	if (ring->space < 0)
-		ring->space += ring->size;
+	ringbuf->head = I915_READ_HEAD(ring) & HEAD_ADDR;
+	ringbuf->tail = I915_READ_TAIL(ring) & TAIL_ADDR;
+	ringbuf->space = ringbuf->head - (ringbuf->tail + I915_RING_FREE_SPACE);
+	if (ringbuf->space < 0)
+		ringbuf->space += ringbuf->size;
 
 	if (!dev->primary->master)
 		return;
 
 	master_priv = dev->primary->master->driver_priv;
-	if (ring->head == ring->tail && master_priv->sarea_priv)
+	if (ringbuf->head == ringbuf->tail && master_priv->sarea_priv)
 		master_priv->sarea_priv->perf_boxes |= I915_BOX_RING_EMPTY;
 }
 
@@ -201,7 +204,7 @@ static int i915_initialize(struct drm_device * dev, drm_i915_init_t * init)
 	}
 
 	if (init->ring_size != 0) {
-		if (LP_RING(dev_priv)->obj != NULL) {
+		if (LP_RINGBUF(dev_priv)->obj != NULL) {
 			i915_dma_cleanup(dev);
 			DRM_ERROR("Client tried to initialize ringbuffer in "
 				  "GEM mode\n");
@@ -238,7 +241,7 @@ static int i915_dma_resume(struct drm_device * dev)
 
 	DRM_DEBUG_DRIVER("%s\n", __func__);
 
-	if (ring->virtual_start == NULL) {
+	if (__get_ringbuf(ring)->virtual_start == NULL) {
 		DRM_ERROR("can not ioremap virtual address for"
 			  " ring buffer\n");
 		return -ENOMEM;
@@ -360,7 +363,7 @@ static int i915_emit_cmds(struct drm_device * dev, int *buffer, int dwords)
 	drm_i915_private_t *dev_priv = dev->dev_private;
 	int i, ret;
 
-	if ((dwords+1) * sizeof(int) >= LP_RING(dev_priv)->size - 8)
+	if ((dwords+1) * sizeof(int) >= LP_RINGBUF(dev_priv)->size - 8)
 		return -EINVAL;
 
 	for (i = 0; i < dwords;) {
@@ -823,7 +826,7 @@ static int i915_irq_emit(struct drm_device *dev, void *data,
 	if (drm_core_check_feature(dev, DRIVER_MODESET))
 		return -ENODEV;
 
-	if (!dev_priv || !LP_RING(dev_priv)->virtual_start) {
+	if (!dev_priv || !LP_RINGBUF(dev_priv)->virtual_start) {
 		DRM_ERROR("called with no initialization\n");
 		return -EINVAL;
 	}
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 37df622..26b89e9 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2479,7 +2479,7 @@ i915_gem_retire_requests_ring(struct intel_engine *ring)
 		 * of tail of the request to update the last known position
 		 * of the GPU head.
 		 */
-		ring->last_retired_head = request->tail;
+		__get_ringbuf(ring)->last_retired_head = request->tail;
 
 		i915_gem_free_request(request);
 	}
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 83d8db5..67a1fc7 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -828,8 +828,8 @@ static void i915_record_ring_state(struct drm_device *dev,
 		ering->hws = I915_READ(mmio);
 	}
 
-	ering->cpu_ring_head = ring->head;
-	ering->cpu_ring_tail = ring->tail;
+	ering->cpu_ring_head = __get_ringbuf(ring)->head;
+	ering->cpu_ring_tail = __get_ringbuf(ring)->tail;
 
 	ering->hangcheck_score = ring->hangcheck.score;
 	ering->hangcheck_action = ring->hangcheck.action;
@@ -936,7 +936,7 @@ static void i915_gem_record_rings(struct drm_device *dev,
 		}
 
 		error->ring[i].ringbuffer =
			i915_error_ggtt_object_create(dev_priv, __get_ringbuf(ring)->obj);
 
 		if (ring->status_page.obj)
 			error->ring[i].hws_page =
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index d30a30b..340cf34 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -1075,7 +1075,7 @@ static void ironlake_rps_change_irq_handler(struct drm_device *dev)
 
 static void notify_ring(struct drm_device *dev, struct intel_engine *ring)
 {
-	if (ring->obj == NULL)
+	if (!intel_ring_initialized(ring))
 		return;
 
 	trace_i915_gem_request_complete(ring);
@@ -2593,6 +2593,7 @@ static struct intel_engine *
 semaphore_waits_for(struct intel_engine *ring, u32 *seqno)
 {
 	struct drm_i915_private *dev_priv = ring->dev->dev_private;
+	struct intel_ringbuffer *ringbuf = __get_ringbuf(ring);
 	u32 cmd, ipehr, head;
 	int i;
 
@@ -2615,10 +2616,10 @@ semaphore_waits_for(struct intel_engine *ring, u32 *seqno)
 		 * our ring is smaller than what the hardware (and hence
 		 * HEAD_ADDR) allows. Also handles wrap-around.
 		 */
-		head &= ring->size - 1;
+		head &= ringbuf->size - 1;
 
 		/* This here seems to blow up */
-		cmd = ioread32(ring->virtual_start + head);
+		cmd = ioread32(ringbuf->virtual_start + head);
 		if (cmd == ipehr)
 			break;
 
@@ -2628,7 +2629,7 @@ semaphore_waits_for(struct intel_engine *ring, u32 *seqno)
 	if (!i)
 		return NULL;
 
-	*seqno = ioread32(ring->virtual_start + head + 4) + 1;
+	*seqno = ioread32(ringbuf->virtual_start + head + 4) + 1;
 	return semaphore_wait_to_signaller_ring(ring, ipehr);
 }
 
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 9387196..0da4289 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -35,20 +35,23 @@
 
 static inline int ring_space(struct intel_engine *ring)
 {
-	int space = (ring->head & HEAD_ADDR) - (ring->tail + I915_RING_FREE_SPACE);
+	struct intel_ringbuffer *ringbuf = __get_ringbuf(ring);
+
+	int space = (ringbuf->head & HEAD_ADDR) - (ringbuf->tail + I915_RING_FREE_SPACE);
 	if (space < 0)
-		space += ring->size;
+		space += ringbuf->size;
 	return space;
 }
 
 void
 __intel_ring_advance(struct intel_engine *ring)
 {
 	struct drm_i915_private *dev_priv = ring->dev->dev_private;
+	struct intel_ringbuffer *ringbuf = __get_ringbuf(ring);
 
-	ring->tail &= ring->size - 1;
+	ringbuf->tail &= ringbuf->size - 1;
 	if (dev_priv->gpu_error.stop_rings & intel_ring_flag(ring))
 		return;
-	ring->write_tail(ring, ring->tail);
+	ring->write_tail(ring, ringbuf->tail);
 }
 
 static int
@@ -434,7 +437,8 @@ static int init_ring_common(struct intel_engine *ring)
 {
 	struct drm_device *dev = ring->dev;
 	drm_i915_private_t *dev_priv = dev->dev_private;
-	struct drm_i915_gem_object *obj = ring->obj;
+	struct intel_ringbuffer *ringbuf = __get_ringbuf(ring);
+	struct drm_i915_gem_object *obj = ringbuf->obj;
 	int ret = 0;
 	u32 head;
 
@@ -483,7 +487,7 @@ static int init_ring_common(struct intel_engine *ring)
 	 * register values. */
 	I915_WRITE_START(ring, i915_gem_obj_ggtt_offset(obj));
 	I915_WRITE_CTL(ring,
-			((ring->size - PAGE_SIZE) & RING_NR_PAGES)
+			((ringbuf->size - PAGE_SIZE) & RING_NR_PAGES)
 			| RING_VALID);
 
 	/* If the head is still not zero, the ring is dead */
@@ -504,10 +508,10 @@ static int init_ring_common(struct intel_engine *ring)
 	if (!drm_core_check_feature(ring->dev, DRIVER_MODESET))
 		i915_kernel_lost_context(ring->dev);
 	else {
-		ring->head = I915_READ_HEAD(ring);
-		ring->tail = I915_READ_TAIL(ring) & TAIL_ADDR;
-		ring->space = ring_space(ring);
-		ring->last_retired_head = -1;
+		ringbuf->head = I915_READ_HEAD(ring);
+		ringbuf->tail = I915_READ_TAIL(ring) & TAIL_ADDR;
+		ringbuf->space = ring_space(ring);
+		ringbuf->last_retired_head = -1;
 	}
 
 	memset(&ring->hangcheck, 0, sizeof(ring->hangcheck));
@@ -1334,21 +1338,24 @@ static int init_phys_status_page(struct intel_engine *ring)
 
 static void destroy_ring_buffer(struct intel_engine *ring)
 {
-	i915_gem_object_ggtt_unpin(ring->obj);
-	drm_gem_object_unreference(&ring->obj->base);
-	ring->obj = NULL;
+	struct intel_ringbuffer *ringbuf = __get_ringbuf(ring);
+
+	i915_gem_object_ggtt_unpin(ringbuf->obj);
+	drm_gem_object_unreference(&ringbuf->obj->base);
+	ringbuf->obj = NULL;
 }
 
 static int alloc_ring_buffer(struct intel_engine *ring)
 {
 	struct drm_device *dev = ring->dev;
 	struct drm_i915_gem_object *obj = NULL;
+	struct intel_ringbuffer *ringbuf = __get_ringbuf(ring);
 	int ret;
 
 	if (!HAS_LLC(dev))
-		obj = i915_gem_object_create_stolen(dev, ring->size);
+		obj = i915_gem_object_create_stolen(dev, ringbuf->size);
 	if (obj == NULL)
-		obj = i915_gem_alloc_object(dev, ring->size);
+		obj = i915_gem_alloc_object(dev, ringbuf->size);
 	if (obj == NULL) {
 		DRM_ERROR("Failed to allocate ringbuffer\n");
 		return -ENOMEM;
@@ -1366,7 +1373,7 @@ static int alloc_ring_buffer(struct intel_engine *ring)
 		return ret;
 	}
 
-	ring->obj = obj;
+	ringbuf->obj = obj;
 
 	return 0;
 }
@@ -1376,12 +1383,13 @@ static int intel_init_ring_buffer(struct drm_device *dev,
 {
 	struct drm_i915_gem_object *obj;
 	struct drm_i915_private *dev_priv = dev->dev_private;
+	struct intel_ringbuffer *ringbuf = __get_ringbuf(ring);
 	int ret;
 
 	ring->dev = dev;
 	INIT_LIST_HEAD(&ring->active_list);
 	INIT_LIST_HEAD(&ring->request_list);
-	ring->size = 32 * PAGE_SIZE;
+	ringbuf->size = 32 * PAGE_SIZE;
 	memset(ring->sync_seqno, 0, sizeof(ring->sync_seqno));
 
 	init_waitqueue_head(&ring->irq_queue);
@@ -1401,12 +1409,12 @@ static int intel_init_ring_buffer(struct drm_device *dev,
 	if (ret)
 		goto err_hws;
 
-	obj = ring->obj;
+	obj = ringbuf->obj;
 
-	ring->virtual_start =
+	ringbuf->virtual_start =
 		ioremap_wc(dev_priv->gtt.mappable_base + i915_gem_obj_ggtt_offset(obj),
-			   ring->size);
-	if (ring->virtual_start == NULL) {
+			   ringbuf->size);
+	if (ringbuf->virtual_start == NULL) {
 		DRM_ERROR("Failed to map ringbuffer.\n");
 		ret = -EINVAL;
 		goto destroy_ring;
@@ -1420,16 +1428,16 @@ static int intel_init_ring_buffer(struct drm_device *dev,
 	 * the TAIL pointer points to within the last 2 cachelines
 	 * of the buffer.
 	 */
-	ring->effective_size = ring->size;
+	ringbuf->effective_size = ringbuf->size;
 	if (IS_I830(ring->dev) || IS_845G(ring->dev))
-		ring->effective_size -= 128;
+		ringbuf->effective_size -= 128;
 
 	i915_cmd_parser_init_ring(ring);
 
 	return 0;
 
 err_unmap:
-	iounmap(ring->virtual_start);
+	iounmap(ringbuf->virtual_start);
 destroy_ring:
 	destroy_ring_buffer(ring);
 err_hws:
@@ -1440,9 +1448,10 @@ err_hws:
 void intel_cleanup_ring_buffer(struct intel_engine *ring)
 {
 	struct drm_i915_private *dev_priv;
+	struct intel_ringbuffer *ringbuf = __get_ringbuf(ring);
 	int ret;
 
-	if (ring->obj == NULL)
+	if (ringbuf->obj == NULL)
 		return;
 
 	/* Disable the ring buffer. The ring must be idle at this point */
@@ -1454,7 +1463,7 @@ void intel_cleanup_ring_buffer(struct intel_engine *ring)
 
 	I915_WRITE_CTL(ring, 0);
 
-	iounmap(ring->virtual_start);
+	iounmap(ringbuf->virtual_start);
 
 	destroy_ring_buffer(ring);
 	ring->preallocated_lazy_request = NULL;
@@ -1469,15 +1478,16 @@ void intel_cleanup_ring_buffer(struct intel_engine *ring)
 static int intel_ring_wait_request(struct intel_engine *ring, int n)
 {
 	struct drm_i915_gem_request *request;
+	struct intel_ringbuffer *ring_buf = __get_ringbuf(ring);
 	u32 seqno = 0, tail;
 	int ret;
 
-	if (ring->last_retired_head != -1) {
-		ring->head = ring->last_retired_head;
-		ring->last_retired_head = -1;
+	if (ring_buf->last_retired_head != -1) {
+		ring_buf->head = ring_buf->last_retired_head;
+		ring_buf->last_retired_head = -1;
 
-		ring->space = ring_space(ring);
-		if (ring->space >= n)
+		ring_buf->space = ring_space(ring);
+		if (ring_buf->space >= n)
 			return 0;
 	}
 
@@ -1487,9 +1497,9 @@ static int intel_ring_wait_request(struct intel_engine *ring, int n)
 		if (request->tail == -1)
 			continue;
 
-		space = request->tail - (ring->tail + I915_RING_FREE_SPACE);
+		space = request->tail - (ring_buf->tail + I915_RING_FREE_SPACE);
 		if (space < 0)
-			space += ring->size;
+			space += ring_buf->size;
 		if (space >= n) {
 			seqno = request->seqno;
 			tail = request->tail;
@@ -1511,9 +1521,9 @@ static int intel_ring_wait_request(struct intel_engine *ring, int n)
 	if (ret)
 		return ret;
 
-	ring->head = tail;
-	ring->space = ring_space(ring);
-	if (WARN_ON(ring->space < n))
+	ring_buf->head = tail;
+	ring_buf->space = ring_space(ring);
+	if (WARN_ON(ring_buf->space < n))
 		return -ENOSPC;
 
 	return 0;
@@ -1523,6 +1533,7 @@ static int ring_wait_for_space(struct intel_engine *ring, int n)
 {
 	struct drm_device *dev = ring->dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
+	struct intel_ringbuffer *ringbuf = __get_ringbuf(ring);
 	unsigned long end;
 	int ret;
 
@@ -1542,9 +1553,9 @@ static int ring_wait_for_space(struct intel_engine *ring, int n)
 	end = jiffies + 60 * HZ;
 
 	do {
-		ring->head = I915_READ_HEAD(ring);
-		ring->space = ring_space(ring);
-		if (ring->space >= n) {
+		ringbuf->head = I915_READ_HEAD(ring);
+		ringbuf->space = ring_space(ring);
+		if (ringbuf->space >= n) {
 			trace_i915_ring_wait_end(ring);
 			return 0;
 		}
@@ -1570,21 +1581,22 @@ static int ring_wait_for_space(struct intel_engine *ring, int n)
 static int intel_wrap_ring_buffer(struct intel_engine *ring)
 {
 	uint32_t __iomem *virt;
-	int rem = ring->size - ring->tail;
+	struct intel_ringbuffer *ringbuf = __get_ringbuf(ring);
+	int rem = ringbuf->size - ringbuf->tail;
 
-	if (ring->space < rem) {
+	if (ringbuf->space < rem) {
 		int ret = ring_wait_for_space(ring, rem);
 		if (ret)
 			return ret;
 	}
 
-	virt = ring->virtual_start + ring->tail;
+	virt = ringbuf->virtual_start + ringbuf->tail;
 	rem /= 4;
 	while (rem--)
 		iowrite32(MI_NOOP, virt++);
 
-	ring->tail = 0;
-	ring->space = ring_space(ring);
+	ringbuf->tail = 0;
+	ringbuf->space = ring_space(ring);
 
 	return 0;
 }
@@ -1634,15 +1646,16 @@ intel_ring_alloc_seqno(struct intel_engine *ring)
 
 static int __intel_ring_prepare(struct intel_engine *ring, int bytes)
 {
+	struct intel_ringbuffer *ringbuf = __get_ringbuf(ring);
 	int ret;
 
-	if (unlikely(ring->tail + bytes > ring->effective_size)) {
+	if (unlikely(ringbuf->tail + bytes > ringbuf->effective_size)) {
 		ret = intel_wrap_ring_buffer(ring);
 		if (unlikely(ret))
 			return ret;
 	}
 
-	if (unlikely(ring->space < bytes)) {
+	if (unlikely(ringbuf->space < bytes)) {
 		ret = ring_wait_for_space(ring, bytes);
 		if (unlikely(ret))
 			return ret;
@@ -1671,14 +1684,14 @@ int intel_ring_begin(struct intel_engine *ring,
 	if (ret)
 		return ret;
 
-	ring->space -= num_dwords * sizeof(uint32_t);
+	__get_ringbuf(ring)->space -= num_dwords * sizeof(uint32_t);
 	return 0;
 }
 
 /* Align the ring tail to a cacheline boundary */
 int intel_ring_cacheline_align(struct intel_engine *ring)
 {
-	int num_dwords = (64 - (ring->tail & 63)) / sizeof(uint32_t);
+	int num_dwords = (64 - (__get_ringbuf(ring)->tail & 63)) / sizeof(uint32_t);
 	int ret;
 
 	if (num_dwords == 0)
@@ -1990,6 +2003,7 @@ int intel_render_ring_init_dri(struct drm_device *dev, u64 start, u32 size)
 {
 	drm_i915_private_t *dev_priv = dev->dev_private;
 	struct intel_engine *ring = &dev_priv->ring[RCS];
+	struct intel_ringbuffer *ringbuf = __get_ringbuf(ring);
 	int ret;
 
 	if (INTEL_INFO(dev)->gen >= 6) {
@@ -2029,13 +2043,13 @@ int intel_render_ring_init_dri(struct drm_device *dev, u64 start, u32 size)
 	INIT_LIST_HEAD(&ring->active_list);
 	INIT_LIST_HEAD(&ring->request_list);
-	ring->size = size;
-	ring->effective_size = ring->size;
+	ringbuf->size = size;
+	ringbuf->effective_size = ringbuf->size;
 	if (IS_I830(ring->dev) || IS_845G(ring->dev))
-		ring->effective_size -= 128;
+		ringbuf->effective_size -= 128;
 
-	ring->virtual_start = ioremap_wc(start, size);
-	if (ring->virtual_start == NULL) {
+	ringbuf->virtual_start = ioremap_wc(start, size);
+	if (ringbuf->virtual_start == NULL) {
 		DRM_ERROR("can not ioremap virtual address for"
 			  " ring buffer\n");
 		return -ENOMEM;
@@ -2227,15 +2241,15 @@ void intel_init_rings_early(struct drm_device *dev)
 	dev_priv->ring[RCS].id = RCS;
 	dev_priv->ring[RCS].mmio_base = RENDER_RING_BASE;
 	dev_priv->ring[RCS].dev = dev;
-	dev_priv->ring[RCS].head = 0;
-	dev_priv->ring[RCS].tail = 0;
+	dev_priv->ring[RCS].default_ringbuf.head = 0;
+	dev_priv->ring[RCS].default_ringbuf.tail = 0;
 
 	dev_priv->ring[BCS].name = "blitter ring";
 	dev_priv->ring[BCS].id = BCS;
 	dev_priv->ring[BCS].mmio_base = BLT_RING_BASE;
 	dev_priv->ring[BCS].dev = dev;
-	dev_priv->ring[BCS].head = 0;
-	dev_priv->ring[BCS].tail = 0;
+	dev_priv->ring[BCS].default_ringbuf.head = 0;
+	dev_priv->ring[BCS].default_ringbuf.tail = 0;
 
 	dev_priv->ring[VCS].name = "bsd ring";
 	dev_priv->ring[VCS].id = VCS;
@@ -2244,13 +2258,13 @@ void intel_init_rings_early(struct drm_device *dev)
 	else
 		dev_priv->ring[VCS].mmio_base = BSD_RING_BASE;
 	dev_priv->ring[VCS].dev = dev;
-	dev_priv->ring[VCS].head = 0;
-	dev_priv->ring[VCS].tail = 0;
+	dev_priv->ring[VCS].default_ringbuf.head = 0;
+	dev_priv->ring[VCS].default_ringbuf.tail = 0;
 
 	dev_priv->ring[VECS].name = "video enhancement ring";
 	dev_priv->ring[VECS].id = VECS;
 	dev_priv->ring[VECS].mmio_base = VEBOX_RING_BASE;
 	dev_priv->ring[VECS].dev = dev;
-	dev_priv->ring[VECS].head = 0;
-	dev_priv->ring[VECS].tail = 0;
+	dev_priv->ring[VECS].default_ringbuf.head = 0;
+	dev_priv->ring[VECS].default_ringbuf.tail = 0;
 }
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index a7c40a8..2281228 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -55,6 +55,27 @@ struct intel_ring_hangcheck {
 
 struct i915_hw_context;
 
+struct intel_ringbuffer {
+	struct drm_i915_gem_object *obj;
+	void __iomem *virtual_start;
+
+	u32 head;
+	u32 tail;
+	int space;
+	int size;
+	int effective_size;
+
+	/** We track the position of the requests in the ring buffer, and
+	 * when each is retired we increment last_retired_head as the GPU
+	 * must have finished processing the request and so we know we
+	 * can advance the ringbuffer up to that position.
+	 *
+	 * last_retired_head is set to -1 after the value is consumed so
+	 * we can detect new retirements.
+	 */
+	u32 last_retired_head;
+};
+
 struct intel_engine {
 	const char	*name;
 	enum intel_ring_id {
@@ -63,29 +84,13 @@ struct intel_engine {
 		BCS,
 		VECS,
 	} id;
+	struct intel_ringbuffer default_ringbuf;
 #define I915_NUM_RINGS 4
 	u32		mmio_base;
-	void		__iomem *virtual_start;
 	struct		drm_device *dev;
-	struct		drm_i915_gem_object *obj;
 
-	u32		head;
-	u32		tail;
-	int		space;
-	int		size;
-	int		effective_size;
 	struct intel_hw_status_page status_page;
 
-	/** We track the position of the requests in the ring buffer, and
-	 * when each is retired we increment last_retired_head as the GPU
-	 * must have finished processing the request and so we know we
-	 * can advance the ringbuffer up to that position.
-	 *
-	 * last_retired_head is set to -1 after the value is consumed so
-	 * we can detect new retirements.
-	 */
-	u32		last_retired_head;
-
 	unsigned irq_refcount; /* protected by dev_priv->irq_lock */
 	u32		irq_enable_mask;	/* bitmask to enable ring interrupt */
 	u32		trace_irq_seqno;
@@ -128,7 +133,7 @@ struct intel_engine {
 
 	/**
 	 * List of objects currently involved in rendering from the
-	 * ringbuffer.
+	 * engine.
 	 *
 	 * Includes buffers having the contents of their GPU caches
 	 * flushed, not necessarily primitives.  last_rendering_seqno
@@ -202,10 +207,16 @@ struct intel_engine {
 	u32 (*get_cmd_length_mask)(u32 cmd_header);
 };
 
+/* This is a temporary define to help us transition to per-context ringbuffers */
+static inline struct intel_ringbuffer *__get_ringbuf(struct intel_engine *ring)
+{
+	return &ring->default_ringbuf;
+}
+
 static inline bool
 intel_ring_initialized(struct intel_engine *ring)
 {
-	return ring->obj != NULL;
+	return __get_ringbuf(ring)->obj != NULL;
 }
 
 static inline unsigned
@@ -275,12 +286,16 @@ int __must_check intel_ring_cacheline_align(struct intel_engine *ring);
 
 static inline void intel_ring_emit(struct intel_engine *ring,
 				   u32 data)
 {
-	iowrite32(data, ring->virtual_start + ring->tail);
-	ring->tail += 4;
+	struct intel_ringbuffer *ringbuf = __get_ringbuf(ring);
+
+	iowrite32(data, ringbuf->virtual_start + ringbuf->tail);
+	ringbuf->tail += 4;
 }
 
 static inline void intel_ring_advance(struct intel_engine *ring)
 {
-	ring->tail &= ring->size - 1;
+	struct intel_ringbuffer *ringbuf = __get_ringbuf(ring);
+
+	ringbuf->tail &= ringbuf->size - 1;
 }
 
 void __intel_ring_advance(struct intel_engine *ring);
@@ -300,7 +315,7 @@ void intel_ring_setup_status_page(struct intel_engine *ring);
 
 static inline u32 intel_ring_get_tail(struct intel_engine *ring)
 {
-	return ring->tail;
+	return __get_ringbuf(ring)->tail;
 }
 
 static inline u32 intel_ring_get_seqno(struct intel_engine *ring)