From patchwork Mon Feb  4 18:34:32 2013
X-Patchwork-Submitter: Jerome Glisse
X-Patchwork-Id: 2094501
From: j.glisse@gmail.com
To: dri-devel@lists.freedesktop.org
Subject: [PATCH]
drm/ttm: avoid allocating memory while spinlock is held v2
Date: Mon,  4 Feb 2013 13:34:32 -0500
Message-Id: <1360002872-17224-1-git-send-email-j.glisse@gmail.com>
Cc: Jerome Glisse
List-Id: Direct Rendering Infrastructure - Development

From: Jerome Glisse

We need to take a reference on the sync object while holding the fence
spinlock, but at the same time we don't want to allocate memory while
holding the spinlock. This patch makes sure we enforce both of these
constraints.

v2: actually test-build it

Fixes https://bugzilla.redhat.com/show_bug.cgi?id=906296

Signed-off-by: Jerome Glisse
---
 drivers/gpu/drm/ttm/ttm_bo_util.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
index 44420fc..f4b7acd 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -413,6 +413,8 @@ static void ttm_transfered_destroy(struct ttm_buffer_object *bo)
  * @bo: A pointer to a struct ttm_buffer_object.
  * @new_obj: A pointer to a pointer to a newly created ttm_buffer_object,
  * holding the data of @bo with the old placement.
+ * @sync_obj: the sync object; the caller is responsible for taking a
+ * reference on behalf of this function
  *
  * This is a utility function that may be called after an accelerated move
  * has been scheduled. A new buffer object is created as a placeholder for
@@ -423,11 +425,11 @@ static void ttm_transfered_destroy(struct ttm_buffer_object *bo)
  */

 static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo,
-				      struct ttm_buffer_object **new_obj)
+				      struct ttm_buffer_object **new_obj,
+				      void *sync_obj)
 {
 	struct ttm_buffer_object *fbo;
 	struct ttm_bo_device *bdev = bo->bdev;
-	struct ttm_bo_driver *driver = bdev->driver;

 	fbo = kzalloc(sizeof(*fbo), GFP_KERNEL);
 	if (!fbo)
@@ -448,7 +450,8 @@ static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo,
 	fbo->vm_node = NULL;
 	atomic_set(&fbo->cpu_writers, 0);

-	fbo->sync_obj = driver->sync_obj_ref(bo->sync_obj);
+	/* reference on sync obj is taken by the caller of this function */
+	fbo->sync_obj = sync_obj;
 	kref_init(&fbo->list_kref);
 	kref_init(&fbo->kref);
 	fbo->destroy = &ttm_transfered_destroy;
@@ -652,6 +655,8 @@ int ttm_bo_move_accel_cleanup(struct ttm_buffer_object *bo,
 		}
 		ttm_bo_free_old_node(bo);
 	} else {
+		void *sync_obj;
+
 		/**
 		 * This should help pipeline ordinary buffer moves.
 		 *
@@ -662,12 +667,14 @@ int ttm_bo_move_accel_cleanup(struct ttm_buffer_object *bo,

 		set_bit(TTM_BO_PRIV_FLAG_MOVING, &bo->priv_flags);

-		/* ttm_buffer_object_transfer accesses bo->sync_obj */
-		ret = ttm_buffer_object_transfer(bo, &ghost_obj);
+		/* take the ref on the sync object before releasing the spinlock */
+		sync_obj = driver->sync_obj_ref(bo->sync_obj);
 		spin_unlock(&bdev->fence_lock);
+
 		if (tmp_obj)
 			driver->sync_obj_unref(&tmp_obj);

+		ret = ttm_buffer_object_transfer(bo, &ghost_obj, sync_obj);
 		if (ret)
 			return ret;