From patchwork Mon Jul 22 02:08:16 2013
X-Patchwork-Submitter: Ben Widawsky
X-Patchwork-Id: 2831004
From: Ben Widawsky <ben@bwidawsk.net>
To: Intel GFX <intel-gfx@lists.freedesktop.org>
Cc: Ben Widawsky
Date: Sun, 21 Jul 2013 19:08:16 -0700
Message-Id: <1374458899-8635-10-git-send-email-ben@bwidawsk.net>
X-Mailer: git-send-email 1.8.3.3
In-Reply-To: <1374458899-8635-1-git-send-email-ben@bwidawsk.net>
References: <1374458899-8635-1-git-send-email-ben@bwidawsk.net>
Subject: [Intel-gfx] [PATCH 09/12] drm/i915: create vmas at execbuf

In order to transition more of our code over to using a VMA instead of an
<obj, vm> pair, we must have the VMA accessible at execbuf time. Up until
now, we've only had a VMA when actually binding an object.

The previous patch helped handle the distinction between bound and unbound.
This patch will help us catch leaks and other issues before we actually
shuffle a bunch of stuff around.

The subsequent patch to fix up the rest of execbuf should be mostly just
code movement; this patch is the major functional change.
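To make the intent concrete, the new flow at execbuf object-lookup time is:
find the existing VMA for the (object, address space) pair, or create one if
none exists yet, and bail out on allocation failure. The fragment below is
only an illustrative sketch, not code from this patch; the caller name
example_prepare_execbuf_object() is hypothetical, while
i915_gem_obj_lookup_or_create_vma() and the surrounding types are the ones
added in the diff that follows.

	/* Illustrative sketch only -- mirrors the lookup-or-create pattern
	 * introduced below.  The caller is hypothetical; the helper and the
	 * types come from the patch itself.
	 */
	static int example_prepare_execbuf_object(struct drm_i915_gem_object *obj,
						  struct i915_address_space *vm)
	{
		struct i915_vma *vma;

		/* Reuse the VMA for (obj, vm) if it already exists, otherwise
		 * create it now, before any binding happens. */
		vma = i915_gem_obj_lookup_or_create_vma(obj, vm);
		if (IS_ERR(vma))
			return PTR_ERR(vma);	/* e.g. -ENOMEM from the create path */

		/* From here on, code can operate on the VMA rather than the
		 * <obj, vm> pair; actual binding still happens later, in
		 * i915_gem_object_bind_to_vm(). */
		return 0;
	}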
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
---
 drivers/gpu/drm/i915/i915_drv.h            |  3 +++
 drivers/gpu/drm/i915/i915_gem.c            | 26 ++++++++++++++++++--------
 drivers/gpu/drm/i915/i915_gem_execbuffer.c | 10 ++++++++--
 3 files changed, 29 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 8d6aa34..59a8c03 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1867,6 +1867,9 @@ void i915_gem_obj_set_color(struct drm_i915_gem_object *o,
			    enum i915_cache_level color);
 struct i915_vma *i915_gem_obj_to_vma(struct drm_i915_gem_object *obj,
				     struct i915_address_space *vm);
+struct i915_vma *
+i915_gem_obj_lookup_or_create_vma(struct drm_i915_gem_object *obj,
+				  struct i915_address_space *vm);
 /* Some GGTT VM helpers */
 #define obj_to_ggtt(obj) \
	(&((struct drm_i915_private *)(obj)->base.dev->dev_private)->gtt.base)
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index a6dc653..0fa6667 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -3111,9 +3111,6 @@ i915_gem_object_bind_to_vm(struct drm_i915_gem_object *obj,
	struct i915_vma *vma;
	int ret;

-	if (WARN_ON(!list_empty(&obj->vma_list)))
-		return -EBUSY;
-
	BUG_ON(!i915_is_ggtt(vm));

	fence_size = i915_gem_get_gtt_size(dev,
@@ -3154,15 +3151,15 @@ i915_gem_object_bind_to_vm(struct drm_i915_gem_object *obj,

	i915_gem_object_pin_pages(obj);

-	/* For now we only ever use 1 vma per object */
-	WARN_ON(!list_empty(&obj->vma_list));
-
-	vma = i915_gem_vma_create(obj, vm);
+	vma = i915_gem_obj_lookup_or_create_vma(obj, vm);
	if (IS_ERR(vma)) {
		i915_gem_object_unpin_pages(obj);
		return PTR_ERR(vma);
	}

+	/* For now we only ever use 1 vma per object */
+	WARN_ON(!list_is_singular(&obj->vma_list));
+
 search_free:
	ret = drm_mm_insert_node_in_range_generic(&vm->mm, &vma->node,
						  size, alignment,
@@ -4054,7 +4051,7 @@ void i915_gem_free_object(struct drm_gem_object *gem_obj)
 struct i915_vma *i915_gem_vma_create(struct drm_i915_gem_object *obj,
				     struct i915_address_space *vm)
 {
-	struct i915_vma *vma = kzalloc(sizeof(*vma), GFP_KERNEL);
+	struct i915_vma *vma = kzalloc(sizeof(*vma), GFP_ATOMIC);
	if (vma == NULL)
		return ERR_PTR(-ENOMEM);

@@ -4829,3 +4826,16 @@ struct i915_vma *i915_gem_obj_to_vma(struct drm_i915_gem_object *obj,

	return NULL;
 }
+
+struct i915_vma *
+i915_gem_obj_lookup_or_create_vma(struct drm_i915_gem_object *obj,
+				  struct i915_address_space *vm)
+{
+	struct i915_vma *vma;
+
+	vma = i915_gem_obj_to_vma(obj, vm);
+	if (!vma)
+		vma = i915_gem_vma_create(obj, vm);
+
+	return vma;
+}
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 6359ef2..1f82a04 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -85,12 +85,14 @@ static int
 eb_lookup_objects(struct eb_objects *eb,
		  struct drm_i915_gem_exec_object2 *exec,
		  const struct drm_i915_gem_execbuffer2 *args,
+		  struct i915_address_space *vm,
		  struct drm_file *file)
 {
	int i;

	spin_lock(&file->table_lock);
	for (i = 0; i < args->buffer_count; i++) {
+		struct i915_vma *vma;
		struct drm_i915_gem_object *obj;

		obj = to_intel_bo(idr_find(&file->object_idr, exec[i].handle));
@@ -111,6 +113,10 @@ eb_lookup_objects(struct eb_objects *eb,
		drm_gem_object_reference(&obj->base);
		list_add_tail(&obj->exec_list, &eb->objects);

+		vma = i915_gem_obj_lookup_or_create_vma(obj, vm);
+		if (IS_ERR(vma))
+			return PTR_ERR(vma);
+
		obj->exec_entry = &exec[i];
		if (eb->and < 0) {
			eb->lut[i] = obj;
@@ -666,7 +672,7 @@ i915_gem_execbuffer_relocate_slow(struct drm_device *dev,

	/* reacquire the objects */
	eb_reset(eb);
-	ret = eb_lookup_objects(eb, exec, args, file);
+	ret = eb_lookup_objects(eb, exec, args, vm, file);
	if (ret)
		goto err;

@@ -1001,7 +1007,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
	}

	/* Look up object handles */
-	ret = eb_lookup_objects(eb, exec, args, file);
+	ret = eb_lookup_objects(eb, exec, args, vm, file);
	if (ret)
		goto err;
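A note on the allocation-flag change above: i915_gem_vma_create() now uses
GFP_ATOMIC instead of GFP_KERNEL, presumably because the lookup-or-create
path can be reached from eb_lookup_objects() while file->table_lock is held,
and a GFP_KERNEL allocation may sleep, which is not allowed under a spinlock.
The fragment below is a generic, hypothetical illustration of that
constraint and is not taken from the driver:

	/* Hypothetical illustration (not driver code): an allocation made
	 * while a spinlock is held must not sleep, so GFP_ATOMIC is required
	 * there; GFP_KERNEL is only safe once the lock has been dropped (or
	 * if the allocation is moved outside the locked region).
	 */
	static void *alloc_under_lock(spinlock_t *lock, size_t size)
	{
		void *p;

		spin_lock(lock);
		p = kzalloc(size, GFP_ATOMIC);	/* may not sleep here */
		spin_unlock(lock);

		return p;
	}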