From patchwork Wed Dec 19 12:08:35 2012
X-Patchwork-Submitter: Daniel Vetter
X-Patchwork-Id: 1895381
From: Daniel Vetter
To: Intel Graphics Development
Cc: Daniel Vetter
Date: Wed, 19 Dec 2012 13:08:35 +0100
Message-Id: <1355918915-12938-1-git-send-email-daniel.vetter@ffwll.ch>
X-Mailer: git-send-email 1.7.11.7
Subject: [Intel-gfx] [PATCH] drm/i915: optionally disable shrinker lock stealing

commit 5774506f157a91400c587b85d1ce4de56f0d32f6
Author: Chris Wilson
Date:   Wed Nov 21 13:04:04 2012 +0000

    drm/i915: Borrow our struct_mutex for the direct reclaim

added a nice trick to steal the struct_mutex lock in the shrinker if it's
the current task holding it. But this also added the requirement that every
place which allocates memory must be careful about the gem state of
objects, since the shrinker could have pulled the rug out from under it.
We've usually solved this by carefully preallocating things, or by ensuring
that buffers are already pinned.

But the shrinker also reaps mmap offsets, so allocating those needs to be
careful, too. Now that code has been factored out into common helpers, we
either have fragile code depending upon the common helpers not doing
something we don't want them to do.
Or we need to reimplement the mmap offset creation and so also leak
implementation details into our code. Since this all results in a leaky
abstraction, cop out by disabling the lock-borrowing trick while calling
down into the helpers. That way our craziness is nicely confined to files
in drm/i915.

This should fix igt/gem_tiled_swapping.

Reported-by: Mika Kuoppala
Cc: Chris Wilson
Cc: Mika Kuoppala
Signed-off-by: Daniel Vetter
---
 drivers/gpu/drm/i915/i915_drv.h |  1 +
 drivers/gpu/drm/i915/i915_gem.c | 15 ++++++++++++---
 2 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 2ab476d..87747da 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -807,6 +807,7 @@ typedef struct drm_i915_private {
 	struct i915_hw_ppgtt *aliasing_ppgtt;
 
 	struct shrinker inactive_shrinker;
+	bool shrinker_no_lock_stealing;
 
 	/**
 	 * List of objects currently involved in rendering.
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 9530592..4d3605c 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1514,9 +1514,11 @@ static int i915_gem_object_create_mmap_offset(struct drm_i915_gem_object *obj)
 	if (obj->base.map_list.map)
 		return 0;
 
+	dev_priv->mm.shrinker_no_lock_stealing = true;
+
 	ret = drm_gem_create_mmap_offset(&obj->base);
 	if (ret != -ENOSPC)
-		return ret;
+		goto out;
 
 	/* Badly fragmented mmap space? The only way we can recover
 	 * space is by destroying unwanted objects. We can't randomly release
@@ -1528,10 +1530,14 @@ static int i915_gem_object_create_mmap_offset(struct drm_i915_gem_object *obj)
 	i915_gem_purge(dev_priv, obj->base.size >> PAGE_SHIFT);
 	ret = drm_gem_create_mmap_offset(&obj->base);
 	if (ret != -ENOSPC)
-		return ret;
+		goto out;
 
 	i915_gem_shrink_all(dev_priv);
-	return drm_gem_create_mmap_offset(&obj->base);
+	ret = drm_gem_create_mmap_offset(&obj->base);
+out:
+	dev_priv->mm.shrinker_no_lock_stealing = false;
+
+	return ret;
 }
 
 static void i915_gem_object_free_mmap_offset(struct drm_i915_gem_object *obj)
@@ -4400,6 +4407,9 @@ i915_gem_inactive_shrink(struct shrinker *shrinker, struct shrink_control *sc)
 		if (!mutex_is_locked_by(&dev->struct_mutex, current))
 			return 0;
 
+		if (dev_priv->mm.shrinker_no_lock_stealing)
+			return 0;
+
 		unlock = false;
 	}