From patchwork Wed Nov 28 11:25:42 2012
X-Patchwork-Submitter: Maarten Lankhorst
X-Patchwork-Id: 1815741
From: Maarten Lankhorst
To: thellstrom@vmware.com, dri-devel@lists.freedesktop.org
Subject: [PATCH 4/6] drm/ttm: cope with reserved buffers on swap list in ttm_bo_swapout, v2
Date: Wed, 28 Nov 2012 12:25:42 +0100
Message-Id: <1354101944-10455-4-git-send-email-maarten.lankhorst@canonical.com>
In-Reply-To: <1354101944-10455-1-git-send-email-maarten.lankhorst@canonical.com>
References: <1354101944-10455-1-git-send-email-maarten.lankhorst@canonical.com>
List-Id: Direct Rendering Infrastructure - Development

Replace the while loop with a simple list_for_each_entry loop, and only
run the delayed destroy cleanup if we can reserve the buffer first. No
race occurs, since the lru lock is never dropped anymore. An empty list
and a list full of unreservable buffers both cause -EBUSY to be
returned, which is identical to the previous situation.
Signed-off-by: Maarten Lankhorst
Reviewed-by: Thomas Hellstrom
---
 drivers/gpu/drm/ttm/ttm_bo.c | 44 ++++++++++++++------------------------------
 1 file changed, 14 insertions(+), 30 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 02b275b..74b296f 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -1795,41 +1795,25 @@ static int ttm_bo_swapout(struct ttm_mem_shrink *shrink)
 	uint32_t swap_placement = (TTM_PL_FLAG_CACHED | TTM_PL_FLAG_SYSTEM);
 
 	spin_lock(&glob->lru_lock);
-	while (ret == -EBUSY) {
-		if (unlikely(list_empty(&glob->swap_lru))) {
-			spin_unlock(&glob->lru_lock);
-			return -EBUSY;
-		}
-
-		bo = list_first_entry(&glob->swap_lru,
-				      struct ttm_buffer_object, swap);
-		kref_get(&bo->list_kref);
-
-		if (!list_empty(&bo->ddestroy)) {
-			ttm_bo_reserve_locked(bo, false, false, false, 0);
-			ttm_bo_cleanup_refs_and_unlock(bo, false, false);
+	list_for_each_entry(bo, &glob->swap_lru, swap) {
+		ret = ttm_bo_reserve_locked(bo, false, true, false, 0);
+		if (!ret)
+			break;
+	}
 
-			kref_put(&bo->list_kref, ttm_bo_release_list);
-			spin_lock(&glob->lru_lock);
-			continue;
-		}
+	if (ret) {
+		spin_unlock(&glob->lru_lock);
+		return ret;
+	}
 
-		/**
-		 * Reserve buffer. Since we unlock while sleeping, we need
-		 * to re-check that nobody removed us from the swap-list while
-		 * we slept.
-		 */
+	kref_get(&bo->list_kref);
 
-		ret = ttm_bo_reserve_locked(bo, false, true, false, 0);
-		if (unlikely(ret == -EBUSY)) {
-			spin_unlock(&glob->lru_lock);
-			ttm_bo_wait_unreserved(bo, false);
-			kref_put(&bo->list_kref, ttm_bo_release_list);
-			spin_lock(&glob->lru_lock);
-		}
+	if (!list_empty(&bo->ddestroy)) {
+		ret = ttm_bo_cleanup_refs_and_unlock(bo, false, false);
+		kref_put(&bo->list_kref, ttm_bo_release_list);
+		return ret;
 	}
-	BUG_ON(ret != 0);
 
 	put_count = ttm_bo_del_from_lru(bo);
 	spin_unlock(&glob->lru_lock);