From patchwork Wed Dec 9 12:46:19 2015
From: ankitprasad.r.sharma@intel.com
To: intel-gfx@lists.freedesktop.org
Cc: Ankitprasad Sharma, akash.goel@intel.com, shashidhar.hiremath@intel.com
Date: Wed, 9 Dec 2015 18:16:19 +0530
Message-Id: <1449665182-10054-4-git-send-email-ankitprasad.r.sharma@intel.com>
In-Reply-To: <1449665182-10054-1-git-send-email-ankitprasad.r.sharma@intel.com>
References: <1449665182-10054-1-git-send-email-ankitprasad.r.sharma@intel.com>
Subject: [Intel-gfx] [PATCH 3/6] drm/i915: Propagating correct error codes to the userspace
X-Patchwork-Id: 7808391

From: Ankitprasad Sharma <ankitprasad.r.sharma@intel.com>

Propagating correct error codes to userspace by using the ERR_PTR and
PTR_ERR macros for stolen-memory-based object allocation.

We generally return -ENOMEM to the user whenever object allocation
fails. This patch helps the user identify the actual reason for the
failure, rather than just -ENOMEM each time.
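For reference, the error-pointer convention this series switches to works
roughly as sketched below. This is a minimal illustration only, not code
from the patch: struct thing, alloc_thing() and use_thing() are hypothetical
names, and only ERR_PTR(), IS_ERR() and PTR_ERR() from <linux/err.h> are the
real kernel interfaces being demonstrated.

#include <linux/err.h>
#include <linux/slab.h>

struct thing {
        int payload;
};

/* Hypothetical allocator: encode the failure reason in the returned
 * pointer with ERR_PTR() instead of collapsing everything to NULL. */
static struct thing *alloc_thing(size_t size)
{
        struct thing *t;

        if (size == 0)
                return ERR_PTR(-EINVAL);        /* invalid request */

        t = kzalloc(sizeof(*t), GFP_KERNEL);
        if (!t)
                return ERR_PTR(-ENOMEM);        /* genuinely out of memory */

        return t;
}

/* Hypothetical caller: test with IS_ERR() and forward PTR_ERR(), so the
 * original errno (-EINVAL, -ENODEV, ...) reaches userspace instead of a
 * blanket -ENOMEM. */
static int use_thing(size_t size)
{
        struct thing *t = alloc_thing(size);

        if (IS_ERR(t))
                return PTR_ERR(t);

        kfree(t);
        return 0;
}

The diff below applies this same pattern to the i915 allocation helpers and
their callers.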
v2: Moved the patch up in the series, added error propagation for
i915_gem_alloc_object too (Chris)

v3: Removed storing of error pointer inside structs, corrected error
propagation in caller functions (Chris)

v4: Remove assignments inside the predicate (Chris)

Signed-off-by: Ankitprasad Sharma <ankitprasad.r.sharma@intel.com>
---
 drivers/gpu/drm/i915/i915_gem.c              | 16 +++++-----
 drivers/gpu/drm/i915/i915_gem_batch_pool.c   |  4 +--
 drivers/gpu/drm/i915/i915_gem_context.c      |  4 +--
 drivers/gpu/drm/i915/i915_gem_render_state.c |  7 +++--
 drivers/gpu/drm/i915/i915_gem_stolen.c       | 43 ++++++++++++++------------
 drivers/gpu/drm/i915/i915_guc_submission.c   | 45 ++++++++++++++++++----------
 drivers/gpu/drm/i915/intel_display.c         |  2 +-
 drivers/gpu/drm/i915/intel_fbdev.c           |  6 ++--
 drivers/gpu/drm/i915/intel_lrc.c             | 10 ++++---
 drivers/gpu/drm/i915/intel_overlay.c         |  4 +--
 drivers/gpu/drm/i915/intel_pm.c              |  2 +-
 drivers/gpu/drm/i915/intel_ringbuffer.c      | 21 ++++++-------
 12 files changed, 95 insertions(+), 69 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 296e63f..5812748 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -393,9 +393,9 @@ i915_gem_create(struct drm_file *file,
 	if (flags & I915_CREATE_PLACEMENT_STOLEN) {
 		mutex_lock(&dev->struct_mutex);
 		obj = i915_gem_object_create_stolen(dev, size);
-		if (!obj) {
+		if (IS_ERR(obj)) {
 			mutex_unlock(&dev->struct_mutex);
-			return -ENOMEM;
+			return PTR_ERR(obj);
 		}
 
 		/* Always clear fresh buffers before handing to userspace */
@@ -411,8 +411,8 @@ i915_gem_create(struct drm_file *file,
 		obj = i915_gem_alloc_object(dev, size);
 	}
 
-	if (obj == NULL)
-		return -ENOMEM;
+	if (IS_ERR(obj))
+		return PTR_ERR(obj);
 
 	ret = drm_gem_handle_create(file, &obj->base, &handle);
 	/* drop reference from allocate - handle holds it now */
@@ -4399,14 +4399,16 @@ struct drm_i915_gem_object *i915_gem_alloc_object(struct drm_device *dev,
 	struct drm_i915_gem_object *obj;
 	struct address_space *mapping;
 	gfp_t mask;
+	int ret;
 
 	obj = i915_gem_object_alloc(dev);
 	if (obj == NULL)
-		return NULL;
+		return ERR_PTR(-ENOMEM);
 
-	if (drm_gem_object_init(dev, &obj->base, size) != 0) {
+	ret = drm_gem_object_init(dev, &obj->base, size);
+	if (ret) {
 		i915_gem_object_free(obj);
-		return NULL;
+		return ERR_PTR(ret);
 	}
 
 	mask = GFP_HIGHUSER | __GFP_RECLAIMABLE;
diff --git a/drivers/gpu/drm/i915/i915_gem_batch_pool.c b/drivers/gpu/drm/i915/i915_gem_batch_pool.c
index 7bf2f3f..d79caa2 100644
--- a/drivers/gpu/drm/i915/i915_gem_batch_pool.c
+++ b/drivers/gpu/drm/i915/i915_gem_batch_pool.c
@@ -135,8 +135,8 @@ i915_gem_batch_pool_get(struct i915_gem_batch_pool *pool,
 		int ret;
 
 		obj = i915_gem_alloc_object(pool->dev, size);
-		if (obj == NULL)
-			return ERR_PTR(-ENOMEM);
+		if (IS_ERR(obj))
+			return obj;
 
 		ret = i915_gem_object_get_pages(obj);
 		if (ret)
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 204dc7c..4d24cfc 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -181,8 +181,8 @@ i915_gem_alloc_context_obj(struct drm_device *dev, size_t size)
 	int ret;
 
 	obj = i915_gem_alloc_object(dev, size);
-	if (obj == NULL)
-		return ERR_PTR(-ENOMEM);
+	if (IS_ERR(obj))
+		return obj;
 
 	/*
 	 * Try to make the context utilize L3 as well as LLC.
diff --git a/drivers/gpu/drm/i915/i915_gem_render_state.c b/drivers/gpu/drm/i915/i915_gem_render_state.c
index 5026a62..2bfdd49 100644
--- a/drivers/gpu/drm/i915/i915_gem_render_state.c
+++ b/drivers/gpu/drm/i915/i915_gem_render_state.c
@@ -58,8 +58,11 @@ static int render_state_init(struct render_state *so, struct drm_device *dev)
 		return -EINVAL;
 
 	so->obj = i915_gem_alloc_object(dev, 4096);
-	if (so->obj == NULL)
-		return -ENOMEM;
+	if (IS_ERR(so->obj)) {
+		ret = PTR_ERR(so->obj);
+		so->obj = NULL;
+		return ret;
+	}
 
 	ret = i915_gem_obj_ggtt_pin(so->obj, 4096, 0);
 	if (ret)
diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c
index b98a3bf..0b0ce11 100644
--- a/drivers/gpu/drm/i915/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/i915_gem_stolen.c
@@ -492,6 +492,7 @@ i915_pages_create_for_stolen(struct drm_device *dev,
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct sg_table *st;
 	struct scatterlist *sg;
+	int ret;
 
 	DRM_DEBUG_DRIVER("offset=0x%x, size=%d\n", offset, size);
 	BUG_ON(offset > dev_priv->gtt.stolen_size - size);
@@ -503,11 +504,12 @@ i915_pages_create_for_stolen(struct drm_device *dev,
 
 	st = kmalloc(sizeof(*st), GFP_KERNEL);
 	if (st == NULL)
-		return NULL;
+		return ERR_PTR(-ENOMEM);
 
-	if (sg_alloc_table(st, 1, GFP_KERNEL)) {
+	ret = sg_alloc_table(st, 1, GFP_KERNEL);
+	if (ret) {
 		kfree(st);
-		return NULL;
+		return ERR_PTR(ret);
 	}
 
 	sg = st->sgl;
@@ -556,18 +558,21 @@ _i915_gem_object_create_stolen(struct drm_device *dev,
 			       struct drm_mm_node *stolen)
 {
 	struct drm_i915_gem_object *obj;
+	int ret = 0;
 
 	obj = i915_gem_object_alloc(dev);
 	if (obj == NULL)
-		return NULL;
+		return ERR_PTR(-ENOMEM);
 
 	drm_gem_private_object_init(dev, &obj->base, stolen->size);
 	i915_gem_object_init(obj, &i915_gem_object_stolen_ops);
 
 	obj->pages = i915_pages_create_for_stolen(dev,
 						  stolen->start, stolen->size);
-	if (obj->pages == NULL)
+	if (IS_ERR(obj->pages)) {
+		ret = PTR_ERR(obj->pages);
 		goto cleanup;
+	}
 
 	i915_gem_object_pin_pages(obj);
 	obj->stolen = stolen;
@@ -579,7 +584,7 @@ _i915_gem_object_create_stolen(struct drm_device *dev,
 
 cleanup:
 	i915_gem_object_free(obj);
-	return NULL;
+	return ERR_PTR(ret);
 }
 
 struct drm_i915_gem_object *
@@ -591,29 +596,29 @@ i915_gem_object_create_stolen(struct drm_device *dev, u64 size)
 	int ret;
 
 	if (!drm_mm_initialized(&dev_priv->mm.stolen))
-		return NULL;
+		return ERR_PTR(-ENODEV);
 
 	DRM_DEBUG_KMS("creating stolen object: size=%llx\n", size);
 	if (size == 0)
-		return NULL;
+		return ERR_PTR(-EINVAL);
 
 	stolen = kzalloc(sizeof(*stolen), GFP_KERNEL);
 	if (!stolen)
-		return NULL;
+		return ERR_PTR(-ENOMEM);
 
 	ret = i915_gem_stolen_insert_node(dev_priv, stolen, size, 4096);
 	if (ret) {
 		kfree(stolen);
-		return NULL;
+		return ERR_PTR(ret);
 	}
 
 	obj = _i915_gem_object_create_stolen(dev, stolen);
-	if (obj)
+	if (!IS_ERR(obj))
 		return obj;
 
 	i915_gem_stolen_remove_node(dev_priv, stolen);
 	kfree(stolen);
-	return NULL;
+	return obj;
 }
 
 struct drm_i915_gem_object *
@@ -630,7 +635,7 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 	int ret;
 
 	if (!drm_mm_initialized(&dev_priv->mm.stolen))
-		return NULL;
+		return ERR_PTR(-ENODEV);
 
 	DRM_DEBUG_KMS("creating preallocated stolen object: stolen_offset=%x, gtt_offset=%x, size=%x\n",
 			stolen_offset, gtt_offset, size);
@@ -638,11 +643,11 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 	/* KISS and expect everything to be page-aligned */
 	if (WARN_ON(size == 0) || WARN_ON(size & 4095) ||
 	    WARN_ON(stolen_offset & 4095))
-		return NULL;
+		return ERR_PTR(-EINVAL);
 
 	stolen = kzalloc(sizeof(*stolen), GFP_KERNEL);
 	if (!stolen)
-		return NULL;
+		return ERR_PTR(-ENOMEM);
 
 	stolen->start = stolen_offset;
 	stolen->size = size;
@@ -652,15 +657,15 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 	if (ret) {
 		DRM_DEBUG_KMS("failed to allocate stolen space\n");
 		kfree(stolen);
-		return NULL;
+		return ERR_PTR(ret);
 	}
 
 	obj = _i915_gem_object_create_stolen(dev, stolen);
-	if (obj == NULL) {
+	if (IS_ERR(obj)) {
 		DRM_DEBUG_KMS("failed to allocate stolen object\n");
 		i915_gem_stolen_remove_node(dev_priv, stolen);
 		kfree(stolen);
-		return NULL;
+		return obj;
 	}
 
 	/* Some objects just need physical mem from stolen space */
@@ -698,5 +703,5 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 
 err:
 	drm_gem_object_unreference(&obj->base);
-	return NULL;
+	return ERR_PTR(ret);
 }
diff --git a/drivers/gpu/drm/i915/i915_guc_submission.c b/drivers/gpu/drm/i915/i915_guc_submission.c
index 4ac8867..aa38ae4 100644
--- a/drivers/gpu/drm/i915/i915_guc_submission.c
+++ b/drivers/gpu/drm/i915/i915_guc_submission.c
@@ -645,22 +645,24 @@ int i915_guc_submit(struct i915_guc_client *client,
 * object needs to be pinned lifetime. Also we must pin it to gtt space other
 * than [0, GUC_WOPCM_TOP) because this range is reserved inside GuC.
 *
- * Return: A drm_i915_gem_object if successful, otherwise NULL.
+ * Return: A drm_i915_gem_object if successful, otherwise error pointer.
 */
 static struct drm_i915_gem_object *gem_allocate_guc_obj(struct drm_device *dev,
							u32 size)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct drm_i915_gem_object *obj;
+	int ret;
 
 	obj = i915_gem_alloc_object(dev, size);
-	if (!obj)
-		return NULL;
+	if (IS_ERR(obj))
+		return obj;
 
-	if (i915_gem_obj_ggtt_pin(obj, PAGE_SIZE,
-			PIN_OFFSET_BIAS | GUC_WOPCM_TOP)) {
+	ret = i915_gem_obj_ggtt_pin(obj, PAGE_SIZE,
+			PIN_OFFSET_BIAS | GUC_WOPCM_TOP);
+	if (ret) {
 		drm_gem_object_unreference(&obj->base);
-		return NULL;
+		return ERR_PTR(ret);
 	}
 
 	/* Invalidate GuC TLB to let GuC take the latest updates to GTT. */
@@ -738,10 +740,11 @@ static struct i915_guc_client *guc_client_alloc(struct drm_device *dev,
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct intel_guc *guc = &dev_priv->guc;
 	struct drm_i915_gem_object *obj;
+	int ret;
 
 	client = kzalloc(sizeof(*client), GFP_KERNEL);
 	if (!client)
-		return NULL;
+		return ERR_PTR(-ENOMEM);
 
 	client->doorbell_id = GUC_INVALID_DOORBELL_ID;
 	client->priority = priority;
@@ -752,13 +755,16 @@ static struct i915_guc_client *guc_client_alloc(struct drm_device *dev,
 			GUC_MAX_GPU_CONTEXTS, GFP_KERNEL);
 	if (client->ctx_index >= GUC_MAX_GPU_CONTEXTS) {
 		client->ctx_index = GUC_INVALID_CTX_ID;
+		ret = -EINVAL;
 		goto err;
 	}
 
 	/* The first page is doorbell/proc_desc. Two followed pages are wq. */
 	obj = gem_allocate_guc_obj(dev, GUC_DB_SIZE + GUC_WQ_SIZE);
-	if (!obj)
+	if (IS_ERR(obj)) {
+		ret = PTR_ERR(obj);
 		goto err;
+	}
 
 	client->client_obj = obj;
 	client->wq_offset = GUC_DB_SIZE;
@@ -778,9 +784,11 @@ static struct i915_guc_client *guc_client_alloc(struct drm_device *dev,
 	client->proc_desc_offset = (GUC_DB_SIZE / 2);
 
 	client->doorbell_id = assign_doorbell(guc, client->priority);
-	if (client->doorbell_id == GUC_INVALID_DOORBELL_ID)
+	if (client->doorbell_id == GUC_INVALID_DOORBELL_ID) {
 		/* XXX: evict a doorbell instead */
+		ret = -EINVAL;
 		goto err;
+	}
 
 	guc_init_proc_desc(guc, client);
 	guc_init_ctx_desc(guc, client);
@@ -788,7 +796,8 @@ static struct i915_guc_client *guc_client_alloc(struct drm_device *dev,
 	/* XXX: Any cache flushes needed? General domain mgmt calls?
 	 */
-	if (host2guc_allocate_doorbell(guc, client))
+	ret = host2guc_allocate_doorbell(guc, client);
+	if (ret)
 		goto err;
 
 	DRM_DEBUG_DRIVER("new priority %u client %p: ctx_index %u db_id %u\n",
@@ -800,7 +809,7 @@ err:
 	DRM_ERROR("FAILED to create priority %u GuC client!\n", priority);
 
 	guc_client_free(dev, client);
-	return NULL;
+	return ERR_PTR(ret);
 }
 
 static void guc_create_log(struct intel_guc *guc)
@@ -825,7 +834,7 @@ static void guc_create_log(struct intel_guc *guc)
 	obj = guc->log_obj;
 	if (!obj) {
 		obj = gem_allocate_guc_obj(dev_priv->dev, size);
-		if (!obj) {
+		if (IS_ERR(obj)) {
 			/* logging will be off */
 			i915.guc_log_level = -1;
 			return;
@@ -855,6 +864,7 @@ int i915_guc_submission_init(struct drm_device *dev)
 	const size_t poolsize = GUC_MAX_GPU_CONTEXTS * ctxsize;
 	const size_t gemsize = round_up(poolsize, PAGE_SIZE);
 	struct intel_guc *guc = &dev_priv->guc;
+	int ret = 0;
 
 	if (!i915.enable_guc_submission)
 		return 0; /* not enabled */
@@ -863,8 +873,11 @@ int i915_guc_submission_init(struct drm_device *dev)
 		return 0; /* already allocated */
 
 	guc->ctx_pool_obj = gem_allocate_guc_obj(dev_priv->dev, gemsize);
-	if (!guc->ctx_pool_obj)
-		return -ENOMEM;
+	if (IS_ERR(guc->ctx_pool_obj)) {
+		ret = PTR_ERR(guc->ctx_pool_obj);
+		guc->ctx_pool_obj = NULL;
+		return ret;
+	}
 
 	spin_lock_init(&dev_priv->guc.host2guc_lock);
 
@@ -884,9 +897,9 @@ int i915_guc_submission_enable(struct drm_device *dev)
 	/* client for execbuf submission */
 	client = guc_client_alloc(dev, GUC_CTX_PRIORITY_KMD_NORMAL, ctx);
-	if (!client) {
+	if (IS_ERR(client)) {
 		DRM_ERROR("Failed to create execbuf guc_client\n");
-		return -ENOMEM;
+		return PTR_ERR(client);
 	}
 
 	guc->execbuf_client = client;
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 77979ed..f281e0b 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -2546,7 +2546,7 @@ intel_alloc_initial_plane_obj(struct intel_crtc *crtc,
 							     base_aligned,
 							     base_aligned,
 							     size_aligned);
-	if (!obj)
+	if (IS_ERR(obj))
 		return false;
 
 	obj->tiling_mode = plane_config->tiling;
diff --git a/drivers/gpu/drm/i915/intel_fbdev.c b/drivers/gpu/drm/i915/intel_fbdev.c
index 840d6bf..f43681e 100644
--- a/drivers/gpu/drm/i915/intel_fbdev.c
+++ b/drivers/gpu/drm/i915/intel_fbdev.c
@@ -146,11 +146,11 @@ static int intelfb_alloc(struct drm_fb_helper *helper,
 	 * features.
 	 */
 	if (size * 2 < dev_priv->gtt.stolen_usable_size)
 		obj = i915_gem_object_create_stolen(dev, size);
-	if (obj == NULL)
+	if (IS_ERR_OR_NULL(obj))
 		obj = i915_gem_alloc_object(dev, size);
-	if (!obj) {
+	if (IS_ERR(obj)) {
 		DRM_ERROR("failed to allocate framebuffer\n");
-		ret = -ENOMEM;
+		ret = PTR_ERR(obj);
 		goto out;
 	}
 
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 06180dc..4539cc6 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1364,9 +1364,11 @@ static int lrc_setup_wa_ctx_obj(struct intel_engine_cs *ring, u32 size)
 	int ret;
 
 	ring->wa_ctx.obj = i915_gem_alloc_object(ring->dev, PAGE_ALIGN(size));
-	if (!ring->wa_ctx.obj) {
+	if (IS_ERR(ring->wa_ctx.obj)) {
 		DRM_DEBUG_DRIVER("alloc LRC WA ctx backing obj failed.\n");
-		return -ENOMEM;
+		ret = PTR_ERR(ring->wa_ctx.obj);
+		ring->wa_ctx.obj = NULL;
+		return ret;
 	}
 
 	ret = i915_gem_obj_ggtt_pin(ring->wa_ctx.obj, PAGE_SIZE, 0);
@@ -2471,9 +2473,9 @@ int intel_lr_context_deferred_alloc(struct intel_context *ctx,
 	context_size += PAGE_SIZE * LRC_PPHWSP_PN;
 
 	ctx_obj = i915_gem_alloc_object(dev, context_size);
-	if (!ctx_obj) {
+	if (IS_ERR(ctx_obj)) {
 		DRM_DEBUG_DRIVER("Alloc LRC backing obj failed.\n");
-		return -ENOMEM;
+		return PTR_ERR(ctx_obj);
 	}
 
 	ringbuf = intel_engine_create_ringbuffer(ring, 4 * PAGE_SIZE);
diff --git a/drivers/gpu/drm/i915/intel_overlay.c b/drivers/gpu/drm/i915/intel_overlay.c
index 76f1980..3a65858 100644
--- a/drivers/gpu/drm/i915/intel_overlay.c
+++ b/drivers/gpu/drm/i915/intel_overlay.c
@@ -1392,9 +1392,9 @@ void intel_setup_overlay(struct drm_device *dev)
 	reg_bo = NULL;
 	if (!OVERLAY_NEEDS_PHYSICAL(dev))
 		reg_bo = i915_gem_object_create_stolen(dev, PAGE_SIZE);
-	if (reg_bo == NULL)
+	if (IS_ERR_OR_NULL(reg_bo))
 		reg_bo = i915_gem_alloc_object(dev, PAGE_SIZE);
-	if (reg_bo == NULL)
+	if (IS_ERR(reg_bo))
 		goto out_free;
 	overlay->reg_bo = reg_bo;
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 647c0ff..6dee908 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -5172,7 +5172,7 @@ static void valleyview_setup_pctx(struct drm_device *dev)
 	 * memory, or any other relevant ranges.
 	 */
 	pctx = i915_gem_object_create_stolen(dev, pctx_size);
-	if (!pctx) {
+	if (IS_ERR(pctx)) {
 		DRM_DEBUG("not enough stolen space for PCTX, disabling\n");
 		return;
 	}
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index c9b081f..5eabaf6 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -678,9 +678,10 @@ intel_init_pipe_control(struct intel_engine_cs *ring)
 	WARN_ON(ring->scratch.obj);
 
 	ring->scratch.obj = i915_gem_alloc_object(ring->dev, 4096);
-	if (ring->scratch.obj == NULL) {
+	if (IS_ERR(ring->scratch.obj)) {
 		DRM_ERROR("Failed to allocate seqno page\n");
-		ret = -ENOMEM;
+		ret = PTR_ERR(ring->scratch.obj);
+		ring->scratch.obj = NULL;
 		goto err;
 	}
 
@@ -1935,9 +1936,9 @@ static int init_status_page(struct intel_engine_cs *ring)
 		int ret;
 
 		obj = i915_gem_alloc_object(ring->dev, 4096);
-		if (obj == NULL) {
+		if (IS_ERR(obj)) {
 			DRM_ERROR("Failed to allocate status page\n");
-			return -ENOMEM;
+			return PTR_ERR(obj);
 		}
 
 		ret = i915_gem_object_set_cache_level(obj, I915_CACHE_LLC);
@@ -2084,10 +2085,10 @@ static int intel_alloc_ringbuffer_obj(struct drm_device *dev,
 	obj = NULL;
 	if (!HAS_LLC(dev))
 		obj = i915_gem_object_create_stolen(dev, ringbuf->size);
-	if (obj == NULL)
+	if (IS_ERR_OR_NULL(obj))
 		obj = i915_gem_alloc_object(dev, ringbuf->size);
-	if (obj == NULL)
-		return -ENOMEM;
+	if (IS_ERR(obj))
+		return PTR_ERR(obj);
 
 	/* mark ring buffers as read-only from GPU side by default */
 	obj->gt_ro = 1;
@@ -2678,7 +2679,7 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
 	if (INTEL_INFO(dev)->gen >= 8) {
 		if (i915_semaphore_is_enabled(dev)) {
 			obj = i915_gem_alloc_object(dev, 4096);
-			if (obj == NULL) {
+			if (IS_ERR(obj)) {
 				DRM_ERROR("Failed to allocate semaphore bo. Disabling semaphores\n");
 				i915.semaphores = 0;
 			} else {
@@ -2785,9 +2786,9 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
 	/* Workaround batchbuffer to combat CS tlb bug. */
 	if (HAS_BROKEN_CS_TLB(dev)) {
 		obj = i915_gem_alloc_object(dev, I830_WA_SIZE);
-		if (obj == NULL) {
+		if (IS_ERR(obj)) {
 			DRM_ERROR("Failed to allocate batch bo\n");
-			return -ENOMEM;
+			return PTR_ERR(obj);
 		}
 
 		ret = i915_gem_obj_ggtt_pin(obj, 0, 0);