From patchwork Wed Jul 20 10:16:51 2022
X-Patchwork-Submitter: Janusz Krzysztofik
X-Patchwork-Id: 12923768
From: Janusz Krzysztofik
To: intel-gfx@lists.freedesktop.org
Cc: Thomas Hellström, Tvrtko Ursulin, dri-devel@lists.freedesktop.org,
    Chris Wilson, Matthew Auld, Janusz Krzysztofik
Subject: [PATCH 1/2] drm/i915/gem: Avoid taking runtime-pm under the shrinker
Date: Wed, 20 Jul 2022 12:16:51 +0200
Message-Id: <20220720101652.93293-1-janusz.krzysztofik@linux.intel.com>

From: Chris Wilson

Inside the shrinker, we cannot wake the device, as that may cause recursion
into fs-reclaim, so instead we only unbind the vma if the device is
currently awake. (In order to provide reclaim while asleep, we do wake the
device up during kswapd -- though we probably want to limit that wake-up to
when we actually have something to shrink!)

To avoid the same fs_reclaim recursion potential during
i915_gem_object_unbind(), we acquire a wakeref there, see commit
3e817471a34c ("drm/i915/gem: Take runtime-pm wakeref prior to unbinding").
However, we also use i915_gem_object_unbind() from the shrinker path to
make the object available for shrinking, and so we must make the wakeref
acquisition here conditional.
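
In short, the change below boils down to the following shape (a simplified
sketch only, with the unbind loop elided; the real hunks follow in the
patch body):

        intel_wakeref_t wakeref = 0;

        /* Test-only callers (I915_GEM_OBJECT_UNBIND_TEST) skip the wakeref. */
        if (!(flags & I915_GEM_OBJECT_UNBIND_TEST))
                wakeref = intel_runtime_pm_get(rpm);

        /* ... walk and unbind the object's vmas, exactly as before ... */

        /* Release the wakeref only if one was actually acquired. */
        if (wakeref)
                intel_runtime_pm_put(rpm, wakeref);

The lockdep report that prompted this change: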
<4> [437.542172] ======================================================
<4> [437.542174] WARNING: possible circular locking dependency detected
<4> [437.542176] 5.19.0-rc6-CI_DRM_11876-g2305e0d00665+ #1 Tainted: G U
<4> [437.542179] ------------------------------------------------------
<4> [437.542181] kswapd0/93 is trying to acquire lock:
<4> [437.542183] ffffffff827a7608 (acpi_wakeup_lock){+.+.}-{3:3}, at: acpi_device_wakeup_disable+0x12/0x50
<4> [437.542191] but task is already holding lock:
<4> [437.542194] ffffffff8275d360 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x91/0x5c0
<4> [437.542199] which lock already depends on the new lock.
<4> [437.542202] the existing dependency chain (in reverse order) is:
<4> [437.542204] -> #2 (fs_reclaim){+.+.}-{0:0}:
<4> [437.542207]        fs_reclaim_acquire+0x9d/0xd0
<4> [437.542211]        kmem_cache_alloc_trace+0x2a/0x250
<4> [437.542214]        __acpi_device_add+0x263/0x3a0
<4> [437.542217]        acpi_add_single_object+0x3ea/0x710
<4> [437.542220]        acpi_bus_check_add+0xf7/0x240
<4> [437.542222]        acpi_bus_scan+0x34/0xf0
<4> [437.542224]        acpi_scan_init+0xf5/0x241
<4> [437.542228]        acpi_init+0x449/0x4aa
<4> [437.542230]        do_one_initcall+0x53/0x2e0
<4> [437.542233]        kernel_init_freeable+0x18f/0x1dd
<4> [437.542236]        kernel_init+0x11/0x110
<4> [437.542239]        ret_from_fork+0x1f/0x30
<4> [437.542241] -> #1 (acpi_device_lock){+.+.}-{3:3}:
<4> [437.542245]        __mutex_lock+0x97/0xf20
<4> [437.542246]        acpi_enable_wakeup_device_power+0x30/0xf0
<4> [437.542249]        __acpi_device_wakeup_enable+0x31/0x110
<4> [437.542252]        acpi_pm_set_device_wakeup+0x55/0x100
<4> [437.542254]        __pci_enable_wake+0x5e/0xa0
<4> [437.542257]        pci_finish_runtime_suspend+0x32/0x70
<4> [437.542259]        pci_pm_runtime_suspend+0xa3/0x160
<4> [437.542262]        __rpm_callback+0x3d/0x110
<4> [437.542265]        rpm_callback+0x54/0x60
<4> [437.542268]        rpm_suspend.part.10+0x105/0x5a0
<4> [437.542270]        pm_runtime_work+0x7d/0x1e0
<4> [437.542273]        process_one_work+0x272/0x5c0
<4> [437.542276]        worker_thread+0x37/0x370
<4> [437.542278]        kthread+0xed/0x120
<4> [437.542280]        ret_from_fork+0x1f/0x30
<4> [437.542282] -> #0 (acpi_wakeup_lock){+.+.}-{3:3}:
<4> [437.542285]        __lock_acquire+0x15ad/0x2940
<4> [437.542288]        lock_acquire+0xd3/0x310
<4> [437.542291]        __mutex_lock+0x97/0xf20
<4> [437.542293]        acpi_device_wakeup_disable+0x12/0x50
<4> [437.542295]        acpi_pm_set_device_wakeup+0x6e/0x100
<4> [437.542297]        __pci_enable_wake+0x73/0xa0
<4> [437.542300]        pci_pm_runtime_resume+0x45/0x90
<4> [437.542302]        __rpm_callback+0x3d/0x110
<4> [437.542304]        rpm_callback+0x54/0x60
<4> [437.542307]        rpm_resume+0x54f/0x750
<4> [437.542309]        __pm_runtime_resume+0x42/0x80
<4> [437.542311]        __intel_runtime_pm_get+0x19/0x80 [i915]
<4> [437.542386]        i915_gem_object_unbind+0x8f/0x3b0 [i915]
<4> [437.542487]        i915_gem_shrink+0x634/0x850 [i915]
<4> [437.542584]        i915_gem_shrinker_scan+0x3a/0xc0 [i915]
<4> [437.542679]        shrink_slab.constprop.97+0x1a4/0x4f0
<4> [437.542684]        shrink_node+0x21e/0x420
<4> [437.542687]        balance_pgdat+0x241/0x5c0
<4> [437.542690]        kswapd+0x229/0x4f0
<4> [437.542694]        kthread+0xed/0x120
<4> [437.542697]        ret_from_fork+0x1f/0x30
<4> [437.542701] other info that might help us debug this:
<4> [437.542705] Chain exists of: acpi_wakeup_lock --> acpi_device_lock --> fs_reclaim
<4> [437.542713] Possible unsafe locking scenario:
<4> [437.542716]        CPU0                    CPU1
<4> [437.542719]        ----                    ----
<4> [437.542721]   lock(fs_reclaim);
<4> [437.542725]                               lock(acpi_device_lock);
<4> [437.542728]                               lock(fs_reclaim);
<4> [437.542732]   lock(acpi_wakeup_lock);
<4> [437.542736] *** DEADLOCK ***
Bug: https://gitlab.freedesktop.org/drm/intel/-/issues/6449
Fixes: 3e817471a34c ("drm/i915/gem: Take runtime-pm wakeref prior to unbinding")
Signed-off-by: Chris Wilson
Cc: Matthew Auld
Cc: stable@vger.kernel.org # v5.6+
Signed-off-by: Janusz Krzysztofik
Reviewed-by: Matthew Auld
---
 drivers/gpu/drm/i915/i915_gem.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 702e5b89be22..910a6fde5726 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -119,8 +119,8 @@ int i915_gem_object_unbind(struct drm_i915_gem_object *obj,
 {
         struct intel_runtime_pm *rpm = &to_i915(obj->base.dev)->runtime_pm;
         bool vm_trylock = !!(flags & I915_GEM_OBJECT_UNBIND_VM_TRYLOCK);
+        intel_wakeref_t wakeref = 0;
         LIST_HEAD(still_in_list);
-        intel_wakeref_t wakeref;
         struct i915_vma *vma;
         int ret;
 
@@ -135,7 +135,8 @@ int i915_gem_object_unbind(struct drm_i915_gem_object *obj,
          * as they are required by the shrinker. Ergo, we wake the device up
          * first just in case.
          */
-        wakeref = intel_runtime_pm_get(rpm);
+        if (!(flags & I915_GEM_OBJECT_UNBIND_TEST))
+                wakeref = intel_runtime_pm_get(rpm);
 
 try_again:
         ret = 0;
@@ -200,7 +201,8 @@ int i915_gem_object_unbind(struct drm_i915_gem_object *obj,
                 goto try_again;
         }
 
-        intel_runtime_pm_put(rpm, wakeref);
+        if (wakeref)
+                intel_runtime_pm_put(rpm, wakeref);
 
         return ret;
 }

From patchwork Wed Jul 20 10:16:52 2022
X-Patchwork-Submitter: Janusz Krzysztofik
X-Patchwork-Id: 12923769
From: Janusz Krzysztofik
To: intel-gfx@lists.freedesktop.org
Subject: [RFC PATCH 2/2] drm/i915/gem: Perform active shrinking from a background thread
Date: Wed, 20 Jul 2022 12:16:52 +0200
Message-Id: <20220720101652.93293-2-janusz.krzysztofik@linux.intel.com>
In-Reply-To: <20220720101652.93293-1-janusz.krzysztofik@linux.intel.com>
References: <20220720101652.93293-1-janusz.krzysztofik@linux.intel.com>
Cc: Thomas Hellström, Tvrtko Ursulin, dri-devel@lists.freedesktop.org,
    Chris Wilson, Matthew Auld, Janusz Krzysztofik

From: Chris Wilson

i915 is very greedy and will retain system pages for as long as the user
requires them; once acquired, they are only returned when the object is
freed. In order to respond to system memory pressure, i915 hooks into the
shrinker subsystem, designed to prune the filesystem caches, to unbind and
return system pages. However, we can only do so if the device is active at
that moment, as we cannot resume the device from inside direct reclaim to
unbind pages from the GPU, nor do we want to delay random processes with
unbounded waits while trying to reclaim active pages.

To work around that quandary, what we avoided in direct reclaim we
delegated to kswapd, as that runs from process context outside of direct
reclaim and is able to sleep and resume the device. In practice, however,
kswapd also uses fs_reclaim_acquire() around its shrink_slab calls,
prohibiting runtime resume. If we cannot wake the device from idle, we will
retain system memory indefinitely.

As we cannot take advantage of kswapd's decoupled process context to
perform an active reclaim of bound pages, spawn our own kthread to wait
under our wakeref. As with kswapd, direct reclaim has no hard dependency on
the background task (other than that a failure to promptly return pages
will implicitly result in oom), and as such the task itself does not
inherit the fs-reclaim context. A page reclaimed by i915 will typically not
be immediately available for re-use, as it will first require writeback,
and so only a future allocation attempt may benefit. Concurrent page
allocation attempts do not wait for either kswapd or our own swapper task.

We mark our kthread as a memalloc task (allowed to dip into memory
reserves, but not allowed to trigger direct reclaim) and annotate the call
into the shrinker with a fs_reclaim critical section. This should prevent
us from accidentally abusing the background swapper task, and so the
swapper kthread behaves like kswapd, with the exception of being allowed to
wake the device up and being decoupled from the shrinker_rwsem.
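
Condensed from the diff below (kick_swapper() on the kswapd side and the
swapper() kthread loop), the handshake between the shrinker and the new
background task looks roughly like this; it is a sketch, not standalone
code, and all names are those introduced by the patch:

        /* kswapd side: cannot wake the device, so publish a scan target and poke the kthread. */
        if (!atomic_long_fetch_add(nr_scan, &i915->mm.swapper.target))
                wake_up_var(&i915->mm.swapper.target);

        /* swapper kthread side: free to sleep and to hold a runtime-pm wakeref. */
        ___wait_var_event(&i915->mm.swapper.target,
                          atomic_long_read(&i915->mm.swapper.target) ||
                          kthread_should_stop(),
                          TASK_IDLE, 0, 0, schedule());
        nr_scan = atomic_long_xchg(&i915->mm.swapper.target, 0);
        with_intel_runtime_pm(&i915->runtime_pm, wakeref) {
                fs_reclaim_acquire(GFP_KERNEL); /* keep lockdep honest about reclaim context */
                i915_gem_shrink(NULL, i915, nr_scan, &nr_scan,
                                I915_SHRINK_ACTIVE | I915_SHRINK_BOUND |
                                I915_SHRINK_UNBOUND | I915_SHRINK_WRITEBACK);
                fs_reclaim_release(GFP_KERNEL);
        }

The full implementation, including fallback behaviour when the kthread
fails to start, follows in the diff.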
Reported-by: Thomas Hellström
Bug: https://gitlab.freedesktop.org/drm/intel/-/issues/6449
Fixes: 178a30c90ac7 ("drm/i915: Unbind objects in shrinker only if device is runtime active")
Signed-off-by: Chris Wilson
Cc: Thomas Hellström
Cc: Matthew Auld
Cc: Tvrtko Ursulin
Cc: stable@vger.kernel.org # v4.8+
Signed-off-by: Janusz Krzysztofik
---
 drivers/gpu/drm/i915/gem/i915_gem_shrinker.c | 134 +++++++++++++++++--
 drivers/gpu/drm/i915/i915_drv.h              |  15 +++
 2 files changed, 135 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
index 1030053571a2..bc6c1978e64a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
@@ -310,6 +310,113 @@ i915_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
         return count;
 }
 
+static unsigned long run_swapper(struct drm_i915_private *i915,
+                                 unsigned long target,
+                                 unsigned long *nr_scanned)
+{
+        return i915_gem_shrink(NULL, i915,
+                               target, nr_scanned,
+                               I915_SHRINK_ACTIVE |
+                               I915_SHRINK_BOUND |
+                               I915_SHRINK_UNBOUND |
+                               I915_SHRINK_WRITEBACK);
+}
+
+static int swapper(void *arg)
+{
+        struct drm_i915_private *i915 = arg;
+        atomic_long_t *target = &i915->mm.swapper.target;
+        unsigned int noreclaim_state;
+
+        /*
+         * For us to be running the swapper implies that the system is under
+         * enough memory pressure to be swapping. At that point, we both want
+         * to ensure we make forward progress in order to reclaim pages from
+         * the device and not contribute further to direct reclaim pressure. We
+         * mark ourselves as a memalloc task in order to not trigger direct
+         * reclaim ourselves, but dip into the system memory reserves for
+         * shrinkers.
+         */
+        noreclaim_state = memalloc_noreclaim_save();
+
+        do {
+                intel_wakeref_t wakeref;
+
+                ___wait_var_event(target,
+                                  atomic_long_read(target) ||
+                                  kthread_should_stop(),
+                                  TASK_IDLE, 0, 0, schedule());
+                if (kthread_should_stop())
+                        break;
+
+                with_intel_runtime_pm(&i915->runtime_pm, wakeref) {
+                        unsigned long nr_scan = atomic_long_xchg(target, 0);
+
+                        /*
+                         * Now that we have woken up the device hierarchy,
+                         * act as a normal shrinker. Our shrinker is primarily
+                         * focussed on supporting direct reclaim (low latency,
+                         * avoiding contention that may lead to more reclaim,
+                         * or prevent that reclaim from making forward progress)
+                         * and we wish to continue that good practice even
+                         * here where we could accidentally sleep holding locks.
+                         *
+                         * Let lockdep know and warn us about any bad practice
+                         * that may lead to high latency in direct reclaim, or
+                         * anywhere else.
+                         *
+                         * While the swapper is active, direct reclaim from
+                         * other threads will also be running in parallel
+                         * through i915_gem_shrink(), scouring for idle pages.
+                         */
+                        fs_reclaim_acquire(GFP_KERNEL);
+                        run_swapper(i915, nr_scan, &nr_scan);
+                        fs_reclaim_release(GFP_KERNEL);
+                }
+        } while (1);
+
+        memalloc_noreclaim_restore(noreclaim_state);
+        return 0;
+}
+
+static void start_swapper(struct drm_i915_private *i915)
+{
+        i915->mm.swapper.tsk = kthread_run(swapper, i915, "i915-swapd");
+        if (IS_ERR(i915->mm.swapper.tsk))
+                drm_err(&i915->drm,
+                        "Failed to launch swapper; memory reclaim may be degraded\n");
+}
+
+static unsigned long kick_swapper(struct drm_i915_private *i915,
+                                  unsigned long nr_scan,
+                                  unsigned long *scanned)
+{
+        /* Run immediately under kswapd if disabled */
+        if (IS_ERR_OR_NULL(i915->mm.swapper.tsk))
+                /*
+                 * Note that as we are still inside kswapd, we are still
+                 * inside a fs_reclaim context and cannot forcibly wake the
+                 * device and so can only opportunistically reclaim bound
+                 * memory.
+                 */
+                return run_swapper(i915, nr_scan, scanned);
+
+        if (!atomic_long_fetch_add(nr_scan, &i915->mm.swapper.target))
+                wake_up_var(&i915->mm.swapper.target);
+
+        return 0;
+}
+
+static void stop_swapper(struct drm_i915_private *i915)
+{
+        struct task_struct *tsk = fetch_and_zero(&i915->mm.swapper.tsk);
+
+        if (IS_ERR_OR_NULL(tsk))
+                return;
+
+        kthread_stop(tsk);
+}
+
 static unsigned long
 i915_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 {
@@ -318,27 +425,22 @@ i915_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
         unsigned long freed;
 
         sc->nr_scanned = 0;
-
         freed = i915_gem_shrink(NULL, i915,
                                 sc->nr_to_scan,
                                 &sc->nr_scanned,
                                 I915_SHRINK_BOUND |
                                 I915_SHRINK_UNBOUND);
-        if (sc->nr_scanned < sc->nr_to_scan && current_is_kswapd()) {
-                intel_wakeref_t wakeref;
+        if (!sc->nr_scanned) /* nothing left to reclaim */
+                return SHRINK_STOP;
 
-                with_intel_runtime_pm(&i915->runtime_pm, wakeref) {
-                        freed += i915_gem_shrink(NULL, i915,
-                                                 sc->nr_to_scan - sc->nr_scanned,
-                                                 &sc->nr_scanned,
-                                                 I915_SHRINK_ACTIVE |
-                                                 I915_SHRINK_BOUND |
-                                                 I915_SHRINK_UNBOUND |
-                                                 I915_SHRINK_WRITEBACK);
-                }
-        }
+        /* Pages still bound and system is failing with direct reclaim? */
+        if (sc->nr_scanned < sc->nr_to_scan && current_is_kswapd())
+                /* Defer high latency tasks to a background thread. */
+                freed += kick_swapper(i915,
+                                      sc->nr_to_scan - sc->nr_scanned,
+                                      &sc->nr_scanned);
 
-        return sc->nr_scanned ? freed : SHRINK_STOP;
+        return freed;
 }
 
 static int
@@ -434,10 +536,14 @@ void i915_gem_driver_register__shrinker(struct drm_i915_private *i915)
         i915->mm.vmap_notifier.notifier_call = i915_gem_shrinker_vmap;
         drm_WARN_ON(&i915->drm,
                     register_vmap_purge_notifier(&i915->mm.vmap_notifier));
+
+        start_swapper(i915);
 }
 
 void i915_gem_driver_unregister__shrinker(struct drm_i915_private *i915)
 {
+        stop_swapper(i915);
+
         drm_WARN_ON(&i915->drm,
                     unregister_vmap_purge_notifier(&i915->mm.vmap_notifier));
         drm_WARN_ON(&i915->drm,
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 3364a6e5169b..976983ab67a9 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -283,6 +283,21 @@ struct i915_gem_mm {
         /* shrinker accounting, also useful for userland debugging */
         u64 shrink_memory;
         u32 shrink_count;
+
+        /* background task for returning bound system pages */
+        struct {
+                struct task_struct *tsk; /* our kswapd equivalent */
+
+                /*
+                 * Track the number of pages do_shrink_slab() has asked us
+                 * to reclaim, and we have failed to find. This count of
+                 * outstanding reclaims is passed to the swapper thread,
+                 * which then blocks as it tries to achieve that goal.
+                 * It is likely that the target overshoots due to the
+                 * latency between our thread and kswapd making new requests.
+                 */
+                atomic_long_t target;
+        } swapper;
 };
 
 #define I915_IDLE_ENGINES_TIMEOUT (200) /* in ms */