From patchwork Wed Oct 2 08:15:17 2013
X-Patchwork-Submitter: David Herrmann
X-Patchwork-Id: 2973301
From: David Herrmann
To: dri-devel@lists.freedesktop.org
Subject: [PATCH v2 1/2] drm/nouveau: embed gem object in nouveau_bo
Date: Wed, 2 Oct 2013 10:15:17 +0200
Message-Id: <1380701718-652-1-git-send-email-dh.herrmann@gmail.com>
X-Mailer: git-send-email 1.8.4
List-Id: Direct Rendering Infrastructure - Development

There is no reason to keep the gem object separately allocated. nouveau is
the last user of gem_obj->driver_private, so if we embed it, we can get rid
of 8 bytes per gem object. The implementation follows the radeon driver.
bo->gem is only valid iff the bo was created via the gem helpers _and_ the
user holds a valid gem reference. That is, the gem object holds a reference
to the nouveau_bo, not the other way around. If you use nouveau_bo_ref() to
gain a bo reference, you are not guaranteed to also hold a gem reference.
The gem object might get destroyed after the last user drops the gem-ref via
drm_gem_object_unreference(). Use drm_gem_object_reference() to gain a
gem reference.

For debugging, we can use bo->gem.filp != NULL to test whether a gem-bo is
valid. However, this shouldn't be used for real functionality, to avoid
depending on gem internals.

Note that the implementation follows the previous style. However, we can no
longer check bo->gem != NULL to test for a valid gem object. This wasn't
done before, so we should be safe.

Signed-off-by: David Herrmann
Acked-by: Maarten Lankhorst
Reviewed-by: Ben Skeggs
---
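A minimal usage sketch (not part of the patch; the helper name below is
illustrative only): with the gem object embedded, code that merely holds a
ttm-level nouveau_bo reference has to take its own gem reference before it
may use &nvbo->gem.

	/* illustrative helper, not part of this patch */
	static struct drm_gem_object *
	example_bo_to_gem(struct nouveau_bo *nvbo)
	{
		/* gem.filp != NULL is the debugging check described above;
		 * only bos allocated via nouveau_gem_new() carry a valid
		 * embedded gem object. */
		if (!nvbo->gem.filp)
			return NULL;

		/* a bo reference alone does not keep the gem object alive */
		drm_gem_object_reference(&nvbo->gem);
		return &nvbo->gem;
	}
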
 drivers/gpu/drm/nouveau/nouveau_abi16.c   |  4 ++--
 drivers/gpu/drm/nouveau/nouveau_bo.c      |  4 ++--
 drivers/gpu/drm/nouveau/nouveau_bo.h      |  5 ++++-
 drivers/gpu/drm/nouveau/nouveau_display.c | 10 ++++-----
 drivers/gpu/drm/nouveau/nouveau_fbcon.c   |  2 +-
 drivers/gpu/drm/nouveau/nouveau_gem.c     | 36 +++++++++++++++----------------
 drivers/gpu/drm/nouveau/nouveau_gem.h     |  2 +-
 drivers/gpu/drm/nouveau/nouveau_prime.c   | 10 +++++----
 8 files changed, 38 insertions(+), 35 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_abi16.c b/drivers/gpu/drm/nouveau/nouveau_abi16.c
index 8f467e7..3897549 100644
--- a/drivers/gpu/drm/nouveau/nouveau_abi16.c
+++ b/drivers/gpu/drm/nouveau/nouveau_abi16.c
@@ -130,7 +130,7 @@ nouveau_abi16_chan_fini(struct nouveau_abi16 *abi16,
 	if (chan->ntfy) {
 		nouveau_bo_vma_del(chan->ntfy, &chan->ntfy_vma);
 		nouveau_bo_unpin(chan->ntfy);
-		drm_gem_object_unreference_unlocked(chan->ntfy->gem);
+		drm_gem_object_unreference_unlocked(&chan->ntfy->gem);
 	}
 
 	if (chan->heap.block_size)
@@ -320,7 +320,7 @@ nouveau_abi16_ioctl_channel_alloc(ABI16_IOCTL_ARGS)
 		goto done;
 	}
 
-	ret = drm_gem_handle_create(file_priv, chan->ntfy->gem,
+	ret = drm_gem_handle_create(file_priv, &chan->ntfy->gem,
 				    &init->notifier_handle);
 	if (ret)
 		goto done;
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
index 755c38d..4172854 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -146,7 +146,7 @@ nouveau_bo_del_ttm(struct ttm_buffer_object *bo)
 	struct drm_device *dev = drm->dev;
 	struct nouveau_bo *nvbo = nouveau_bo(bo);
 
-	if (unlikely(nvbo->gem))
+	if (unlikely(nvbo->gem.filp))
 		DRM_ERROR("bo %p still attached to GEM object\n", bo);
 	WARN_ON(nvbo->pin_refcnt > 0);
 	nv10_bo_put_tile_region(dev, nvbo->tile, NULL);
@@ -1267,7 +1267,7 @@ nouveau_bo_verify_access(struct ttm_buffer_object *bo, struct file *filp)
 {
 	struct nouveau_bo *nvbo = nouveau_bo(bo);
 
-	return drm_vma_node_verify_access(&nvbo->gem->vma_node, filp);
+	return drm_vma_node_verify_access(&nvbo->gem.vma_node, filp);
 }
 
 static int
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.h b/drivers/gpu/drm/nouveau/nouveau_bo.h
index 653dbbb..ff17c1f 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.h
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.h
@@ -27,7 +27,10 @@ struct nouveau_bo {
 	u32 tile_flags;
 	struct nouveau_drm_tile *tile;
 
-	struct drm_gem_object *gem;
+	/* Only valid if allocated via nouveau_gem_new() and iff you hold a
+	 * gem reference to it! For debugging, use gem.filp != NULL to test
+	 * whether it is valid. */
+	struct drm_gem_object gem;
 
 	/* protect by the ttm reservation lock */
 	int pin_refcnt;
diff --git a/drivers/gpu/drm/nouveau/nouveau_display.c b/drivers/gpu/drm/nouveau/nouveau_display.c
index 7848590..bdd5cf7 100644
--- a/drivers/gpu/drm/nouveau/nouveau_display.c
+++ b/drivers/gpu/drm/nouveau/nouveau_display.c
@@ -50,7 +50,7 @@ nouveau_user_framebuffer_destroy(struct drm_framebuffer *drm_fb)
 	struct nouveau_framebuffer *fb = nouveau_framebuffer(drm_fb);
 
 	if (fb->nvbo)
-		drm_gem_object_unreference_unlocked(fb->nvbo->gem);
+		drm_gem_object_unreference_unlocked(&fb->nvbo->gem);
 
 	drm_framebuffer_cleanup(drm_fb);
 	kfree(fb);
@@ -63,7 +63,7 @@ nouveau_user_framebuffer_create_handle(struct drm_framebuffer *drm_fb,
 {
 	struct nouveau_framebuffer *fb = nouveau_framebuffer(drm_fb);
 
-	return drm_gem_handle_create(file_priv, fb->nvbo->gem, handle);
+	return drm_gem_handle_create(file_priv, &fb->nvbo->gem, handle);
 }
 
 static const struct drm_framebuffer_funcs nouveau_framebuffer_funcs = {
@@ -674,8 +674,8 @@ nouveau_display_dumb_create(struct drm_file *file_priv, struct drm_device *dev,
 	if (ret)
 		return ret;
 
-	ret = drm_gem_handle_create(file_priv, bo->gem, &args->handle);
-	drm_gem_object_unreference_unlocked(bo->gem);
+	ret = drm_gem_handle_create(file_priv, &bo->gem, &args->handle);
+	drm_gem_object_unreference_unlocked(&bo->gem);
 	return ret;
 }
 
@@ -688,7 +688,7 @@ nouveau_display_dumb_map_offset(struct drm_file *file_priv,
 
 	gem = drm_gem_object_lookup(dev, file_priv, handle);
 	if (gem) {
-		struct nouveau_bo *bo = gem->driver_private;
+		struct nouveau_bo *bo = nouveau_gem_object(gem);
 		*poffset = drm_vma_node_offset_addr(&bo->bo.vma_node);
 		drm_gem_object_unreference_unlocked(gem);
 		return 0;
diff --git a/drivers/gpu/drm/nouveau/nouveau_fbcon.c b/drivers/gpu/drm/nouveau/nouveau_fbcon.c
index a86ecf6..c80b519 100644
--- a/drivers/gpu/drm/nouveau/nouveau_fbcon.c
+++ b/drivers/gpu/drm/nouveau/nouveau_fbcon.c
@@ -420,7 +420,7 @@ nouveau_fbcon_destroy(struct drm_device *dev, struct nouveau_fbdev *fbcon)
 		nouveau_bo_unmap(nouveau_fb->nvbo);
 		nouveau_bo_vma_del(nouveau_fb->nvbo, &nouveau_fb->vma);
 		nouveau_bo_unpin(nouveau_fb->nvbo);
-		drm_gem_object_unreference_unlocked(nouveau_fb->nvbo->gem);
+		drm_gem_object_unreference_unlocked(&nouveau_fb->nvbo->gem);
 		nouveau_fb->nvbo = NULL;
 	}
 	drm_fb_helper_fini(&fbcon->helper);
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
index f32b712..6618318 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
@@ -43,20 +43,17 @@ nouveau_gem_object_new(struct drm_gem_object *gem)
 void
 nouveau_gem_object_del(struct drm_gem_object *gem)
 {
-	struct nouveau_bo *nvbo = gem->driver_private;
+	struct nouveau_bo *nvbo = nouveau_gem_object(gem);
 	struct ttm_buffer_object *bo = &nvbo->bo;
 
-	if (!nvbo)
-		return;
-	nvbo->gem = NULL;
-
 	if (gem->import_attach)
 		drm_prime_gem_destroy(gem, nvbo->bo.sg);
 
-	ttm_bo_unref(&bo);
-
 	drm_gem_object_release(gem);
-	kfree(gem);
+
+	/* reset filp so nouveau_bo_del_ttm() can test for it */
+	gem->filp = NULL;
+	ttm_bo_unref(&bo);
 }
 
 int
@@ -186,14 +183,15 @@ nouveau_gem_new(struct drm_device *dev, int size, int align, uint32_t domain,
 	if (nv_device(drm->device)->card_type >= NV_50)
 		nvbo->valid_domains &= domain;
 
-	nvbo->gem = drm_gem_object_alloc(dev, nvbo->bo.mem.size);
-	if (!nvbo->gem) {
+	/* Initialize the embedded gem-object. We return a single gem-reference
+	 * to the caller, instead of a normal nouveau_bo ttm reference. */
+	ret = drm_gem_object_init(dev, &nvbo->gem, nvbo->bo.mem.size);
+	if (ret) {
 		nouveau_bo_ref(NULL, pnvbo);
 		return -ENOMEM;
 	}
 
-	nvbo->bo.persistent_swap_storage = nvbo->gem->filp;
-	nvbo->gem->driver_private = nvbo;
+	nvbo->bo.persistent_swap_storage = nvbo->gem.filp;
 	return 0;
 }
 
@@ -250,15 +248,15 @@ nouveau_gem_ioctl_new(struct drm_device *dev, void *data,
 	if (ret)
 		return ret;
 
-	ret = drm_gem_handle_create(file_priv, nvbo->gem, &req->info.handle);
+	ret = drm_gem_handle_create(file_priv, &nvbo->gem, &req->info.handle);
 	if (ret == 0) {
-		ret = nouveau_gem_info(file_priv, nvbo->gem, &req->info);
+		ret = nouveau_gem_info(file_priv, &nvbo->gem, &req->info);
 		if (ret)
 			drm_gem_handle_delete(file_priv, req->info.handle);
 	}
 
 	/* drop reference from allocate - handle holds it now */
-	drm_gem_object_unreference_unlocked(nvbo->gem);
+	drm_gem_object_unreference_unlocked(&nvbo->gem);
 	return ret;
 }
 
@@ -266,7 +264,7 @@ static int
 nouveau_gem_set_domain(struct drm_gem_object *gem, uint32_t read_domains,
 		       uint32_t write_domains, uint32_t valid_domains)
 {
-	struct nouveau_bo *nvbo = gem->driver_private;
+	struct nouveau_bo *nvbo = nouveau_gem_object(gem);
 	struct ttm_buffer_object *bo = &nvbo->bo;
 	uint32_t domains = valid_domains & nvbo->valid_domains &
 		(write_domains ? write_domains : read_domains);
@@ -327,7 +325,7 @@ validate_fini_list(struct list_head *list, struct nouveau_fence *fence,
 		list_del(&nvbo->entry);
 		nvbo->reserved_by = NULL;
 		ttm_bo_unreserve_ticket(&nvbo->bo, ticket);
-		drm_gem_object_unreference_unlocked(nvbo->gem);
+		drm_gem_object_unreference_unlocked(&nvbo->gem);
 	}
 }
 
@@ -376,7 +374,7 @@ retry:
 			validate_fini(op, NULL);
 			return -ENOENT;
 		}
-		nvbo = gem->driver_private;
+		nvbo = nouveau_gem_object(gem);
 		if (nvbo == res_bo) {
 			res_bo = NULL;
 			drm_gem_object_unreference_unlocked(gem);
@@ -478,7 +476,7 @@ validate_list(struct nouveau_channel *chan, struct nouveau_cli *cli,
 			return ret;
 		}
 
-		ret = nouveau_gem_set_domain(nvbo->gem, b->read_domains,
+		ret = nouveau_gem_set_domain(&nvbo->gem, b->read_domains,
 					     b->write_domains,
 					     b->valid_domains);
 		if (unlikely(ret)) {
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h
index 502e429..b535895 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.h
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.h
@@ -12,7 +12,7 @@
 static inline struct nouveau_bo *
 nouveau_gem_object(struct drm_gem_object *gem)
 {
-	return gem ? gem->driver_private : NULL;
+	return gem ? container_of(gem, struct nouveau_bo, gem) : NULL;
 }
 
 /* nouveau_gem.c */
diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
index e90468d..51a2cb1 100644
--- a/drivers/gpu/drm/nouveau/nouveau_prime.c
+++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
@@ -71,14 +71,16 @@ struct drm_gem_object *nouveau_gem_prime_import_sg_table(struct drm_device *dev,
 		return ERR_PTR(ret);
 
 	nvbo->valid_domains = NOUVEAU_GEM_DOMAIN_GART;
-	nvbo->gem = drm_gem_object_alloc(dev, nvbo->bo.mem.size);
-	if (!nvbo->gem) {
+
+	/* Initialize the embedded gem-object. We return a single gem-reference
+	 * to the caller, instead of a normal nouveau_bo ttm reference. */
+	ret = drm_gem_object_init(dev, &nvbo->gem, nvbo->bo.mem.size);
+	if (ret) {
 		nouveau_bo_ref(NULL, &nvbo);
 		return ERR_PTR(-ENOMEM);
 	}
 
-	nvbo->gem->driver_private = nvbo;
-	return nvbo->gem;
+	return &nvbo->gem;
 }
 
 int nouveau_gem_prime_pin(struct drm_gem_object *obj)