From patchwork Sat May 25 19:27:06 2013
X-Patchwork-Submitter: Ben Widawsky
X-Patchwork-Id: 2614221
From: Ben Widawsky
To: Intel GFX
Cc: Ben Widawsky
Date: Sat, 25 May 2013 12:27:06 -0700
Message-Id: <1369510028-3343-33-git-send-email-ben@bwidawsk.net>
X-Mailer: git-send-email 1.8.2.3
In-Reply-To: <1369510028-3343-1-git-send-email-ben@bwidawsk.net>
References: <1369510028-3343-1-git-send-email-ben@bwidawsk.net>
Subject: [Intel-gfx] [PATCH 32/34] drm/i915: Create VMAs (part 1)

Creates the VMA, but leaves the old obj->gtt_space in place. This
primarily puts the basic infrastructure in place and helps check for
leaks.

BISECT WARNING: This patch was not meant to be bisected across. If it
does end up upstream, it should be included in the 3-part series for
creating the VMA.

Signed-off-by: Ben Widawsky
---
 drivers/gpu/drm/i915/i915_drv.h        | 28 ++++++++++++++++++-
 drivers/gpu/drm/i915/i915_gem.c        | 49 +++++++++++++++++++++++++++++++++-
 drivers/gpu/drm/i915/i915_gem_evict.c  |  4 +++
 drivers/gpu/drm/i915/i915_gem_gtt.c    |  2 ++
 drivers/gpu/drm/i915/i915_gem_stolen.c | 12 +++++++++
 5 files changed, 93 insertions(+), 2 deletions(-)
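
[Review note, not part of the patch: a minimal sketch of the lifecycle
the new helpers introduce, assuming the one-VMA-per-bound-object model
of this part 1. example_bind()/example_unbind() are hypothetical
wrappers; everything they call is added by the diff below.]

static int example_bind(struct drm_i915_gem_object *obj,
			struct drm_mm_node *node)
{
	struct i915_vma *vma;

	/* Part 1 creates exactly one VMA per bound object, at bind time. */
	vma = i915_gem_vma_create(obj);
	if (vma == NULL)
		return -ENOMEM;

	/* The VMA only shadows the legacy obj->gtt_space node for now. */
	vma->node.start = node->start;
	vma->node.size = node->size;
	list_add(&vma->vma_link, &obj->vma_list);
	return 0;
}

static void example_unbind(struct drm_i915_gem_object *obj)
{
	struct i915_vma *vma = __i915_obj_to_vma(obj);

	/* Unlink before destroying: i915_gem_vma_destroy() warns on a
	 * still-linked VMA. */
	list_del_init(&vma->vma_link);
	i915_gem_vma_destroy(vma);
}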
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 0f70abe4..324ab0f 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -507,6 +507,18 @@ struct i915_hw_ppgtt {
 	void (*cleanup)(struct i915_hw_ppgtt *ppgtt);
 };
 
+/* To make things as simple as possible (i.e. no refcounting), a VMA's lifetime
+ * will always be <= an object's lifetime. So object refcounting should cover us.
+ */
+struct i915_vma {
+	struct i915_address_space *vm;
+	struct drm_i915_gem_object *obj;
+	struct drm_mm_node node;
+	/* Page aligned offset (helper for stolen) */
+	unsigned long deferred_offset;
+
+	struct list_head vma_link; /* Link in the object's VMA list */
+};
 
 /* This must match up with the value previously used for execbuf2.rsvd1. */
 #define DEFAULT_CONTEXT_ID 0
@@ -1148,8 +1160,9 @@ struct drm_i915_gem_object {
 
 	const struct drm_i915_gem_object_ops *ops;
 
-	/** Current space allocated to this object in the GTT, if any. */
 	struct drm_mm_node *gtt_space;
+	struct list_head vma_list;
+
 	/** Stolen memory for this object, instead of being backed by shmem. */
 	struct drm_mm_node *stolen;
 	struct list_head gtt_list;
@@ -1277,6 +1290,7 @@ struct drm_i915_gem_object {
 static inline unsigned long i915_gem_obj_offset(struct drm_i915_gem_object *o)
 {
+	BUG_ON(list_empty(&o->vma_list));
 	return o->gtt_space->start;
 }
 
@@ -1287,6 +1301,7 @@ static inline bool i915_gem_obj_bound(struct drm_i915_gem_object *o)
 static inline unsigned long i915_gem_obj_size(struct drm_i915_gem_object *o)
 {
+	BUG_ON(list_empty(&o->vma_list));
 	return o->gtt_space->size;
 }
 
@@ -1296,6 +1311,15 @@ static inline void i915_gem_obj_set_color(struct drm_i915_gem_object *o,
 	o->gtt_space->color = color;
 }
 
+/* This is a temporary define to help transition us to real VMAs. If you see
+ * this, you're either reviewing code, or bisecting it. */
+static inline struct i915_vma *__i915_obj_to_vma(struct drm_i915_gem_object *obj)
+{
+	BUG_ON(!i915_gem_obj_bound(obj));
+	BUG_ON(list_empty(&obj->vma_list));
+	return list_first_entry(&obj->vma_list, struct i915_vma, vma_link);
+}
+
 /**
  * Request queue structure.
  *
@@ -1596,6 +1620,8 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj,
 struct drm_i915_gem_object *i915_gem_alloc_object(struct drm_device *dev,
						  size_t size);
 void i915_gem_free_object(struct drm_gem_object *obj);
+struct i915_vma *i915_gem_vma_create(struct drm_i915_gem_object *obj);
+void i915_gem_vma_destroy(struct i915_vma *vma);
 int __must_check i915_gem_object_pin(struct drm_i915_gem_object *obj,
				     uint32_t alignment,
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 159b30f..d82863c 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2475,6 +2475,7 @@ int
 i915_gem_object_unbind(struct drm_i915_gem_object *obj)
 {
 	drm_i915_private_t *dev_priv = obj->base.dev->dev_private;
+	struct i915_vma *vma;
 	int ret;
 
 	if (!i915_gem_obj_bound(obj))
@@ -2515,6 +2516,11 @@ i915_gem_object_unbind(struct drm_i915_gem_object *obj)
 	/* Avoid an unnecessary call to unbind on rebind. */
 	obj->map_and_fenceable = true;
 
+	vma = __i915_obj_to_vma(obj);
+	list_del_init(&vma->vma_link);
+//	drm_mm_remove_node(&vma->node);
+	i915_gem_vma_destroy(vma);
+
 	drm_mm_put_block(obj->gtt_space);
 	obj->gtt_space = NULL;
 
@@ -2946,8 +2952,12 @@ i915_gem_object_bind_to_gtt(struct drm_i915_gem_object *obj,
 	bool mappable, fenceable;
 	size_t max = map_and_fenceable ?
 		dev_priv->gtt.mappable_end : dev_priv->gtt.base.total;
+	struct i915_vma *vma;
 	int ret;
 
+	if (WARN_ON(!list_empty(&obj->vma_list)))
+		return -EBUSY;
+
 	fence_size = i915_gem_get_gtt_size(dev,
					   obj->base.size,
					   obj->tiling_mode);
@@ -2988,6 +2998,12 @@ i915_gem_object_bind_to_gtt(struct drm_i915_gem_object *obj,
 		i915_gem_object_unpin_pages(obj);
 		return -ENOMEM;
 	}
+	vma = i915_gem_vma_create(obj);
+	if (vma == NULL) {
+		kfree(node);
+		i915_gem_object_unpin_pages(obj);
+		return -ENOMEM;
+	}
 
 search_free:
 	ret = drm_mm_insert_node_in_range_generic(&i915_gtt_vm->mm, node,
@@ -3024,6 +3040,9 @@ search_free:
 	list_add_tail(&obj->mm_list, &i915_gtt_vm->inactive_list);
 
 	obj->gtt_space = node;
+	vma->node.start = node->start;
+	vma->node.size = node->size;
+	list_add(&vma->vma_link, &obj->vma_list);
 
 	fenceable =
 		node->size == fence_size &&
@@ -3182,6 +3201,7 @@ int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
 {
 	struct drm_device *dev = obj->base.dev;
 	drm_i915_private_t *dev_priv = dev->dev_private;
+	struct drm_mm_node *node = NULL;
 	int ret;
 
 	if (obj->cache_level == cache_level)
@@ -3192,7 +3212,12 @@ int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
 		return -EBUSY;
 	}
 
-	if (!i915_gem_valid_gtt_space(dev, obj->gtt_space, cache_level)) {
+	if (i915_gem_obj_bound(obj)) {
+		node = obj->gtt_space;
+		BUG_ON(node->start != __i915_obj_to_vma(obj)->node.start);
+	}
+
+	if (!i915_gem_valid_gtt_space(dev, node, cache_level)) {
 		ret = i915_gem_object_unbind(obj);
 		if (ret)
 			return ret;
@@ -3737,6 +3762,7 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj,
 	INIT_LIST_HEAD(&obj->gtt_list);
 	INIT_LIST_HEAD(&obj->ring_list);
 	INIT_LIST_HEAD(&obj->exec_list);
+	INIT_LIST_HEAD(&obj->vma_list);
 
 	obj->ops = ops;
 
@@ -3851,6 +3877,27 @@ void i915_gem_free_object(struct drm_gem_object *gem_obj)
 	i915_gem_object_free(obj);
 }
 
+struct i915_vma *i915_gem_vma_create(struct drm_i915_gem_object *obj)
+{
+	struct drm_i915_private *dev_priv = obj->base.dev->dev_private;
+	struct i915_vma *vma = kzalloc(sizeof(*vma), GFP_KERNEL);
+	if (vma == NULL)
+		return ERR_PTR(-ENOMEM);
+
+	INIT_LIST_HEAD(&vma->vma_link);
+	vma->vm = i915_gtt_vm;
+	vma->obj = obj;
+
+	return vma;
+}
+
+void i915_gem_vma_destroy(struct i915_vma *vma)
+{
+	WARN_ON(!list_empty(&vma->vma_link));
+	WARN_ON(vma->node.allocated);
+	kfree(vma);
+}
+
 int
 i915_gem_idle(struct drm_device *dev)
 {
diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c
index 92856a2..44f2b99 100644
--- a/drivers/gpu/drm/i915/i915_gem_evict.c
+++ b/drivers/gpu/drm/i915/i915_gem_evict.c
@@ -38,6 +38,7 @@ mark_free(struct drm_i915_gem_object *obj, struct list_head *unwind)
 		return false;
 
 	list_add(&obj->exec_list, unwind);
+	BUG_ON(__i915_obj_to_vma(obj)->node.start != i915_gem_obj_offset(obj));
 	return drm_mm_scan_add_block(obj->gtt_space);
 }
 
@@ -107,6 +108,8 @@ none:
 				       struct drm_i915_gem_object,
 				       exec_list);
+
+		BUG_ON(__i915_obj_to_vma(obj)->node.start != i915_gem_obj_offset(obj));
 		ret = drm_mm_scan_remove_block(obj->gtt_space);
 		BUG_ON(ret);
 
@@ -127,6 +130,7 @@ found:
 		obj = list_first_entry(&unwind_list,
 				       struct drm_i915_gem_object,
 				       exec_list);
+		BUG_ON(__i915_obj_to_vma(obj)->node.start != i915_gem_obj_offset(obj));
 		if (drm_mm_scan_remove_block(obj->gtt_space)) {
 			list_move(&obj->exec_list, &eviction_list);
 			drm_gem_object_reference(&obj->base);
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 1fab5a8..2e97361 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -663,6 +663,7 @@ void i915_gem_setup_global_gtt(struct drm_device *dev,
 			      i915_gem_obj_offset(obj), obj->base.size);
 
 		BUG_ON((gtt_offset & I915_GTT_RESERVED) == 0);
+		BUG_ON((__i915_obj_to_vma(obj)->deferred_offset & I915_GTT_RESERVED) == 0);
 		gtt_offset = gtt_offset & ~I915_GTT_RESERVED;
 		obj->gtt_space = kzalloc(sizeof(*obj->gtt_space), GFP_KERNEL);
 		if (!obj->gtt_space) {
@@ -676,6 +677,7 @@ void i915_gem_setup_global_gtt(struct drm_device *dev,
 		if (ret)
 			DRM_DEBUG_KMS("Reservation failed\n");
 		obj->has_global_gtt_mapping = 1;
+		list_add(&__i915_obj_to_vma(obj)->vma_link, &obj->vma_list);
 	}
 
 	i915_gtt_vm->start = start;
diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c
index 86c8feb..f057b7c 100644
--- a/drivers/gpu/drm/i915/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/i915_gem_stolen.c
@@ -322,6 +322,7 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct drm_i915_gem_object *obj;
 	struct drm_mm_node *stolen;
+	struct i915_vma *vma;
 	int ret;
 
 	if (dev_priv->gtt.stolen_base == 0)
@@ -357,6 +358,11 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 		return NULL;
 	}
 
+	vma = i915_gem_vma_create(obj);
+	if (!vma) {
+		drm_gem_object_unreference(&obj->base);
+		return NULL;
+	}
 	/* To simplify the initialisation sequence between KMS and GTT,
 	 * we allow construction of the stolen object prior to
 	 * setting up the GTT space. The actual reservation will occur
@@ -365,6 +371,7 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 	if (drm_mm_initialized(&i915_gtt_vm->mm)) {
 		obj->gtt_space = kzalloc(sizeof(*obj->gtt_space), GFP_KERNEL);
 		if (!obj->gtt_space) {
+			i915_gem_vma_destroy(vma);
 			drm_gem_object_unreference(&obj->base);
 			return NULL;
 		}
@@ -372,15 +379,20 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 						  gtt_offset, size);
 		if (ret) {
 			DRM_DEBUG_KMS("failed to allocate stolen GTT space\n");
+			i915_gem_vma_destroy(vma);
 			drm_gem_object_unreference(&obj->base);
 			kfree(obj->gtt_space);
 			return NULL;
 		}
+		vma->node.start = obj->gtt_space->start;
+		vma->node.size = obj->gtt_space->size;
 		obj->gtt_space->start = gtt_offset;
+		list_add(&vma->vma_link, &obj->vma_list);
 	} else {
 		/* NB: Safe because we assert page alignment */
 		obj->gtt_space = (struct drm_mm_node *)
 			((uintptr_t)gtt_offset | I915_GTT_RESERVED);
+		vma->deferred_offset = gtt_offset | I915_GTT_RESERVED;
 	}
 
 	obj->has_global_gtt_mapping = 1;
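
[Review note, not part of the patch: the BUG_ON()s added across
i915_gem.c, i915_gem_evict.c and i915_gem_gtt.c all enforce the same
transition invariant. Collected once, under a hypothetical helper name:]

static inline void example_assert_vma_shadows_gtt_space(struct drm_i915_gem_object *obj)
{
	/* Until the VMA owns the node, obj->gtt_space stays authoritative
	 * and the shadow VMA's node must mirror its placement exactly;
	 * bind and unbind keep the two in sync. */
	if (i915_gem_obj_bound(obj))
		BUG_ON(__i915_obj_to_vma(obj)->node.start !=
		       i915_gem_obj_offset(obj));
}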