From patchwork Fri Nov 30 12:12:57 2012
X-Patchwork-Submitter: Maarten Lankhorst
X-Patchwork-Id: 1825261
From: Maarten Lankhorst
To: dri-devel@lists.freedesktop.org, thellstrom@vmware.com
Subject: [PATCH 3/6] drm/ttm: add ttm_bo_reserve_slowpath
Date: Fri, 30 Nov 2012 13:12:57 +0100
Message-Id: <1354277580-17958-3-git-send-email-maarten.lankhorst@canonical.com>
In-Reply-To: <1354277580-17958-1-git-send-email-maarten.lankhorst@canonical.com>
References: <50B8A254.904@canonical.com>
 <1354277580-17958-1-git-send-email-maarten.lankhorst@canonical.com>

Instead of dropping everything, waiting for the bo to be unreserved and
trying again, a better strategy is to do a blocking wait. This maps much
better to a mutex_lock-like call.
Signed-off-by: Maarten Lankhorst
---
 drivers/gpu/drm/ttm/ttm_bo.c    | 47 +++++++++++++++++++++++++++++++++++++++++
 include/drm/ttm/ttm_bo_driver.h | 30 ++++++++++++++++++++++++++
 2 files changed, 77 insertions(+)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 61b5cd0..174b325 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -310,6 +310,53 @@ int ttm_bo_reserve(struct ttm_buffer_object *bo,
 	return ret;
 }
 
+int ttm_bo_reserve_slowpath_nolru(struct ttm_buffer_object *bo,
+				  bool interruptible, uint32_t sequence)
+{
+	bool wake_up = false;
+	int ret;
+
+	while (unlikely(atomic_xchg(&bo->reserved, 1) != 0)) {
+		WARN_ON(bo->seq_valid && sequence == bo->val_seq);
+
+		ret = ttm_bo_wait_unreserved(bo, interruptible);
+
+		if (unlikely(ret))
+			return ret;
+	}
+
+	if ((bo->val_seq - sequence < (1 << 31)) || !bo->seq_valid)
+		wake_up = true;
+
+	/**
+	 * Wake up waiters that may need to recheck for deadlock,
+	 * if we decreased the sequence number.
+	 */
+	bo->val_seq = sequence;
+	bo->seq_valid = true;
+	if (wake_up)
+		wake_up_all(&bo->event_queue);
+
+	return 0;
+}
+
+int ttm_bo_reserve_slowpath(struct ttm_buffer_object *bo,
+			    bool interruptible, uint32_t sequence)
+{
+	struct ttm_bo_global *glob = bo->glob;
+	int put_count, ret;
+
+	ret = ttm_bo_reserve_slowpath_nolru(bo, interruptible, sequence);
+	if (likely(!ret)) {
+		spin_lock(&glob->lru_lock);
+		put_count = ttm_bo_del_from_lru(bo);
+		spin_unlock(&glob->lru_lock);
+		ttm_bo_list_ref_sub(bo, put_count, true);
+	}
+	return ret;
+}
+EXPORT_SYMBOL(ttm_bo_reserve_slowpath);
+
 void ttm_bo_unreserve_locked(struct ttm_buffer_object *bo)
 {
 	ttm_bo_add_to_lru(bo);
diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h
index 6fff432..5af71af 100644
--- a/include/drm/ttm/ttm_bo_driver.h
+++ b/include/drm/ttm/ttm_bo_driver.h
@@ -821,6 +821,36 @@ extern int ttm_bo_reserve(struct ttm_buffer_object *bo,
 			  bool interruptible,
 			  bool no_wait, bool use_sequence, uint32_t sequence);
 
+/**
+ * ttm_bo_reserve_slowpath_nolru:
+ * @bo: A pointer to a struct ttm_buffer_object.
+ * @interruptible: Sleep interruptible if waiting.
+ * @sequence: Set (@bo)->sequence to this value after lock
+ *
+ * This is called after ttm_bo_reserve returns -EAGAIN and we backed off
+ * from all our other reservations. Because there are no other reservations
+ * held by us, this function cannot deadlock any more.
+ *
+ * Will not remove reserved buffers from the lru lists.
+ * Otherwise identical to ttm_bo_reserve_slowpath.
+ */
+extern int ttm_bo_reserve_slowpath_nolru(struct ttm_buffer_object *bo,
+					 bool interruptible,
+					 uint32_t sequence);
+
+
+/**
+ * ttm_bo_reserve_slowpath:
+ * @bo: A pointer to a struct ttm_buffer_object.
+ * @interruptible: Sleep interruptible if waiting.
+ * @sequence: Set (@bo)->sequence to this value after lock
+ *
+ * This is called after ttm_bo_reserve returns -EAGAIN and we backed off
+ * from all our other reservations. Because there are no other reservations
+ * held by us, this function cannot deadlock any more.
+ */
+extern int ttm_bo_reserve_slowpath(struct ttm_buffer_object *bo,
+				   bool interruptible, uint32_t sequence);
 
 /**
  * ttm_bo_reserve_nolru:
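
For readers not steeped in the reservation dance, below is a rough,
untested sketch (not part of this patch) of how a driver's validation
path could use the new call once ttm_bo_reserve() hits its
deadlock-avoidance path. The struct my_validate_entry type and the
my_reserve_list() helper are invented for illustration; only
ttm_bo_reserve(), ttm_bo_unreserve() and the ttm_bo_reserve_slowpath()
added here are real TTM entry points.

#include <linux/list.h>
#include <drm/ttm/ttm_bo_api.h>
#include <drm/ttm/ttm_bo_driver.h>

struct my_validate_entry {
	struct list_head head;
	struct ttm_buffer_object *bo;
	bool reserved;
};

static int my_reserve_list(struct list_head *list, uint32_t sequence)
{
	struct my_validate_entry *entry, *busy;
	int ret;

retry:
	list_for_each_entry(entry, list, head) {
		if (entry->reserved)
			continue;	/* already taken via the slowpath */

		/* Fast path: try to reserve in list order. */
		ret = ttm_bo_reserve(entry->bo, true, false, true, sequence);
		if (!ret) {
			entry->reserved = true;
			continue;
		}
		if (ret != -EAGAIN)
			return ret;	/* e.g. -ERESTARTSYS; caller cleans up */

		/*
		 * Deadlock avoidance fired: drop every reservation we
		 * already hold, then sleep on the contended buffer with
		 * the new slowpath.  With nothing else held this cannot
		 * deadlock, and we avoid busy-looping on the old
		 * unreserve-and-retry scheme.
		 */
		busy = entry;
		list_for_each_entry(entry, list, head) {
			if (entry->reserved) {
				ttm_bo_unreserve(entry->bo);
				entry->reserved = false;
			}
		}

		ret = ttm_bo_reserve_slowpath(busy->bo, true, sequence);
		if (ret)
			return ret;
		busy->reserved = true;
		goto retry;
	}
	return 0;
}

This is the kind of caller the kerneldoc above has in mind: back off
completely after -EAGAIN, then block on the one contended buffer
instead of repeatedly polling for it to become unreserved.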