From patchwork Wed Jul 3 21:45:21 2013
X-Patchwork-Submitter: Ben Widawsky
X-Patchwork-Id: 2820611
From: Ben Widawsky
To: Intel GFX
Cc: Ben Widawsky, dri-devel@lists.freedesktop.org
Subject: [PATCH 1/6] drm: pre allocate node for create_block
Date: Wed, 3 Jul 2013 14:45:21 -0700
Message-Id: <1372887926-1147-1-git-send-email-ben@bwidawsk.net>
X-Mailer: git-send-email 1.8.3.2

For an upcoming patch where we introduce the i915 VMA, it's ideal to have
the drm_mm_node as part of the VMA struct (i.e. it is pre-allocated).

Part of the conversion to VMAs is to kill off obj->gtt_space. Doing this
will break a bunch of code; among it are two callers of
drm_mm_create_block(), both related to stolen memory.

It also allows us to embed the drm_mm_node into the object for now, which
provides a nice transition over to the new code.

v2: Reordered to do before ripping out obj->gtt_offset. Some minor
cleanups made available because of reordering.

CC:
Signed-off-by: Ben Widawsky
---
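A minimal sketch of the new calling convention, for reference while reading
the diff below: the caller now owns the drm_mm_node allocation, and
drm_mm_create_block() only reserves the requested range, returning 0 on
success or -ENOSPC when no hole covers it. The reserve_range() wrapper here
is purely illustrative and is not part of this series.

/*
 * Illustrative sketch only (not from this patch): the caller pre-allocates
 * the node and checks the int return of drm_mm_create_block().
 */
#include <linux/err.h>
#include <linux/slab.h>
#include <drm/drm_mm.h>

static struct drm_mm_node *reserve_range(struct drm_mm *mm,
                                         unsigned long start,
                                         unsigned long size)
{
        struct drm_mm_node *node;
        int ret;

        /* The node is owned by the caller; drm_mm no longer allocates it. */
        node = kzalloc(sizeof(*node), GFP_KERNEL);
        if (!node)
                return ERR_PTR(-ENOMEM);

        /* New signature: node passed in, int returned (-ENOSPC on failure). */
        ret = drm_mm_create_block(mm, node, start, size);
        if (ret) {
                kfree(node);
                return ERR_PTR(ret);
        }

        return node;
}

Teardown for a caller-allocated node would pair drm_mm_remove_node() with
kfree(), rather than relying on drm_mm to free the node itself.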
 drivers/gpu/drm/drm_mm.c               | 16 +++++----------
 drivers/gpu/drm/i915/i915_gem_gtt.c    | 18 +++++++++++++----
 drivers/gpu/drm/i915/i915_gem_stolen.c | 36 +++++++++++++++++++++++-----------
 include/drm/drm_mm.h                   |  9 +++++----
 4 files changed, 49 insertions(+), 30 deletions(-)

diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c
index 07cf99c..9e8dfbc 100644
--- a/drivers/gpu/drm/drm_mm.c
+++ b/drivers/gpu/drm/drm_mm.c
@@ -147,12 +147,10 @@ static void drm_mm_insert_helper(struct drm_mm_node *hole_node,
 	}
 }
 
-struct drm_mm_node *drm_mm_create_block(struct drm_mm *mm,
-					unsigned long start,
-					unsigned long size,
-					bool atomic)
+int drm_mm_create_block(struct drm_mm *mm, struct drm_mm_node *node,
+			unsigned long start, unsigned long size)
 {
-	struct drm_mm_node *hole, *node;
+	struct drm_mm_node *hole;
 	unsigned long end = start + size;
 	unsigned long hole_start;
 	unsigned long hole_end;
@@ -161,10 +159,6 @@ struct drm_mm_node *drm_mm_create_block(struct drm_mm *mm,
 		if (hole_start > start || hole_end < end)
 			continue;
 
-		node = drm_mm_kmalloc(mm, atomic);
-		if (unlikely(node == NULL))
-			return NULL;
-
 		node->start = start;
 		node->size = size;
 		node->mm = mm;
@@ -184,11 +178,11 @@ struct drm_mm_node *drm_mm_create_block(struct drm_mm *mm,
 			node->hole_follows = 1;
 		}
 
-		return node;
+		return 0;
 	}
 
 	WARN(1, "no hole found for block 0x%lx + 0x%lx\n",
 	     start, size);
-	return NULL;
+	return -ENOSPC;
 }
 EXPORT_SYMBOL(drm_mm_create_block);
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 66929ea..5c6fc0e 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -629,14 +629,24 @@ void i915_gem_setup_global_gtt(struct drm_device *dev,
 
 	/* Mark any preallocated objects as occupied */
 	list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) {
+		int ret;
 		DRM_DEBUG_KMS("reserving preallocated space: %x + %zx\n",
 			      obj->gtt_offset, obj->base.size);
 
 		BUG_ON(obj->gtt_space != I915_GTT_RESERVED);
-		obj->gtt_space = drm_mm_create_block(&dev_priv->mm.gtt_space,
-						     obj->gtt_offset,
-						     obj->base.size,
-						     false);
+		obj->gtt_space = kzalloc(sizeof(*obj->gtt_space), GFP_KERNEL);
+		if (!obj->gtt_space) {
+			DRM_ERROR("Failed to preserve all objects\n");
+			break;
+		}
+		ret = drm_mm_create_block(&dev_priv->mm.gtt_space,
+					  obj->gtt_space,
+					  obj->gtt_offset,
+					  obj->base.size);
+		if (ret) {
+			DRM_DEBUG_KMS("Reservation failed\n");
+			kfree(obj->gtt_space);
+		}
 		obj->has_global_gtt_mapping = 1;
 	}
 
diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c
index 8e02344..f9db84a 100644
--- a/drivers/gpu/drm/i915/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/i915_gem_stolen.c
@@ -330,6 +330,7 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct drm_i915_gem_object *obj;
 	struct drm_mm_node *stolen;
+	int ret;
 
 	if (dev_priv->mm.stolen_base == 0)
 		return NULL;
@@ -344,11 +345,15 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 	if (WARN_ON(size == 0))
 		return NULL;
 
-	stolen = drm_mm_create_block(&dev_priv->mm.stolen,
-				     stolen_offset, size,
-				     false);
-	if (stolen == NULL) {
+	stolen = kzalloc(sizeof(*stolen), GFP_KERNEL);
+	if (!stolen)
+		return NULL;
+
+	ret = drm_mm_create_block(&dev_priv->mm.stolen, stolen, stolen_offset,
+				  size);
+	if (ret) {
 		DRM_DEBUG_KMS("failed to allocate stolen space\n");
+		kfree(stolen);
 		return NULL;
 	}
 
@@ -369,13 +374,18 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 	 * later.
 	 */
 	if (drm_mm_initialized(&dev_priv->mm.gtt_space)) {
-		obj->gtt_space = drm_mm_create_block(&dev_priv->mm.gtt_space,
-						     gtt_offset, size,
-						     false);
-		if (obj->gtt_space == NULL) {
+		obj->gtt_space = kzalloc(sizeof(*obj->gtt_space), GFP_KERNEL);
+		if (!obj->gtt_space) {
+			DRM_DEBUG_KMS("-ENOMEM stolen GTT space\n");
+			goto unref_out;
+		}
+
+		ret = drm_mm_create_block(&dev_priv->mm.gtt_space,
+					  obj->gtt_space,
+					  gtt_offset, size);
+		if (ret) {
 			DRM_DEBUG_KMS("failed to allocate stolen GTT space\n");
-			drm_gem_object_unreference(&obj->base);
-			return NULL;
+			goto unref_out;
 		}
 	} else
 		obj->gtt_space = I915_GTT_RESERVED;
@@ -385,8 +395,12 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 	list_add_tail(&obj->global_list, &dev_priv->mm.bound_list);
 	list_add_tail(&obj->mm_list, &dev_priv->mm.inactive_list);
 
 	return obj;
+
+unref_out:
+	drm_gem_object_unreference(&obj->base);
+	drm_mm_put_block(stolen);
+	return NULL;
 }
 
 void
diff --git a/include/drm/drm_mm.h b/include/drm/drm_mm.h
index 88591ef..d8b56b7 100644
--- a/include/drm/drm_mm.h
+++ b/include/drm/drm_mm.h
@@ -138,10 +138,10 @@ static inline unsigned long drm_mm_hole_node_end(struct drm_mm_node *hole_node)
 /*
  * Basic range manager support (drm_mm.c)
  */
-extern struct drm_mm_node *drm_mm_create_block(struct drm_mm *mm,
-					       unsigned long start,
-					       unsigned long size,
-					       bool atomic);
+extern int drm_mm_create_block(struct drm_mm *mm,
+			       struct drm_mm_node *node,
+			       unsigned long start,
+			       unsigned long size);
 extern struct drm_mm_node *drm_mm_get_block_generic(struct drm_mm_node *node,
 						    unsigned long size,
 						    unsigned alignment,
@@ -155,6 +155,7 @@ extern struct drm_mm_node *drm_mm_get_block_range_generic(
 						unsigned long start,
 						unsigned long end,
 						int atomic);
+
 static inline struct drm_mm_node *drm_mm_get_block(struct drm_mm_node *parent,
 						   unsigned long size,
 						   unsigned alignment)