From patchwork Fri Nov 30 12:12:58 2012
X-Patchwork-Submitter: Maarten Lankhorst
X-Patchwork-Id: 1825271
From: Maarten Lankhorst <maarten.lankhorst@canonical.com>
To: dri-devel@lists.freedesktop.org, thellstrom@vmware.com
Subject: [PATCH 4/6] drm/ttm: use ttm_bo_reserve_slowpath_nolru in ttm_eu_reserve_buffers
Date: Fri, 30 Nov 2012 13:12:58 +0100
Message-Id: <1354277580-17958-4-git-send-email-maarten.lankhorst@canonical.com>
In-Reply-To: <1354277580-17958-1-git-send-email-maarten.lankhorst@canonical.com>
References: <50B8A254.904@canonical.com> <1354277580-17958-1-git-send-email-maarten.lankhorst@canonical.com>

This requires re-use of the seqno: instead of spinning with a new seqno
on every pass, we keep the current one while waiting on the contended
buffer.  We still drop all other reservations we hold; only once the
slowpath reservation succeeds do we try to take the other reservations
back.  Re-using the seqno should also increase fairness slightly.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
---
 drivers/gpu/drm/ttm/ttm_execbuf_util.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_execbuf_util.c b/drivers/gpu/drm/ttm/ttm_execbuf_util.c
index c7d3236..c02b2b6 100644
--- a/drivers/gpu/drm/ttm/ttm_execbuf_util.c
+++ b/drivers/gpu/drm/ttm/ttm_execbuf_util.c
@@ -129,13 +129,17 @@ int ttm_eu_reserve_buffers(struct list_head *list)
 	entry = list_first_entry(list, struct ttm_validate_buffer, head);
 	glob = entry->bo->glob;
 
-retry:
 	spin_lock(&glob->lru_lock);
 	val_seq = entry->bo->bdev->val_seq++;
 
+retry:
 	list_for_each_entry(entry, list, head) {
 		struct ttm_buffer_object *bo = entry->bo;
 
+		/* already slowpath reserved? */
+		if (entry->reserved)
+			continue;
+
 		ret = ttm_bo_reserve_nolru(bo, true, true, true, val_seq);
 		switch (ret) {
 		case 0:
@@ -157,9 +161,15 @@ retry:
 			ttm_eu_backoff_reservation_locked(list);
 			spin_unlock(&glob->lru_lock);
 			ttm_eu_list_ref_sub(list);
-			ret = ttm_bo_wait_unreserved(bo, true);
+			ret = ttm_bo_reserve_slowpath_nolru(bo, true, val_seq);
 			if (unlikely(ret != 0))
 				return ret;
+			spin_lock(&glob->lru_lock);
+			entry->reserved = true;
+			if (unlikely(atomic_read(&bo->cpu_writers) > 0)) {
+				ret = -EBUSY;
+				goto err;
+			}
 			goto retry;
 		default:
 			goto err;