From patchwork Mon Nov 23 09:20:24 2015
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 7678261
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org
Cc: Akash Goel, sourab.gupta@intel.com
Date: Mon, 23 Nov 2015 09:20:24 +0000
Message-Id: <1448270424-16612-1-git-send-email-chris@chris-wilson.co.uk>
X-Mailer: git-send-email 2.6.2
Subject: [Intel-gfx] [PATCH] drm/i915: Disable shrinker for non-swapped backed objects

If the system has no available swap pages, we cannot make forward
progress in the shrinker by releasing active pages, only by releasing
purgeable pages, which are immediately reaped. Take total_swap_pages
into account when counting up the objects available to be shrunk and
when subsequently shrinking them. By doing so, we avoid unbinding
objects that cannot be shrunk, and so avoid wasting CPU cycles flushing
those objects from the GPU to the system and then immediately back
again (as they will more than likely be reused shortly after).

Based on a patch by Akash Goel.
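
For reference, the gating rule the patch encodes can be sketched outside
the driver as follows. This is illustrative only: the struct, the
shrinkable_sketch() helper and the swap_pages_available parameter are
stand-ins for the real obj->pages_pin_count / num_vma_bound() /
total_swap_pages / obj->madv checks below, not i915 code.

    /* Stand-alone illustration of the shrink gating rule; not i915 code. */
    #include <stdbool.h>
    #include <stdio.h>

    struct fake_obj {
            int pages_pin_count;    /* pins held on the backing pages */
            int vma_bound;          /* pins attributable to GPU bindings */
            bool purgeable;         /* userspace marked contents discardable */
    };

    /* Unbinding only helps if the GPU bindings are the sole reason the
     * pages are pinned, and the contents can then either be discarded
     * (purgeable) or written out to swap.
     */
    static bool shrinkable_sketch(const struct fake_obj *obj,
                                  long swap_pages_available)
    {
            if (obj->pages_pin_count != obj->vma_bound)
                    return false;

            return swap_pages_available > 0 || obj->purgeable;
    }

    int main(void)
    {
            struct fake_obj clean  = { 1, 1, false };
            struct fake_obj purged = { 1, 1, true };

            /* With no swap, only the purgeable object is worth unbinding. */
            printf("clean, no swap:     %d\n", shrinkable_sketch(&clean, 0));
            printf("purgeable, no swap: %d\n", shrinkable_sketch(&purged, 0));
            printf("clean, with swap:   %d\n", shrinkable_sketch(&clean, 1024));
            return 0;
    }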
Reported-by: Akash Goel
Signed-off-by: Chris Wilson
Cc: Akash Goel
Cc: sourab.gupta@intel.com
---
 drivers/gpu/drm/i915/i915_gem_shrinker.c | 55 ++++++++++++++++++++++----------
 1 file changed, 39 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_shrinker.c b/drivers/gpu/drm/i915/i915_gem_shrinker.c
index f7df54a8ee2b..0823f321b7de 100644
--- a/drivers/gpu/drm/i915/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/i915_gem_shrinker.c
@@ -47,6 +47,41 @@ static bool mutex_is_locked_by(struct mutex *mutex, struct task_struct *task)
 #endif
 }
 
+static int num_vma_bound(struct drm_i915_gem_object *obj)
+{
+	struct i915_vma *vma;
+	int count = 0;
+
+	list_for_each_entry(vma, &obj->vma_list, vma_link) {
+		if (drm_mm_node_allocated(&vma->node))
+			count++;
+		if (vma->pin_count)
+			count++;
+	}
+
+	return count;
+}
+
+static bool can_release_pages(struct drm_i915_gem_object *obj)
+{
+	/* Only report true if by unbinding the object and putting its pages
+	 * we can actually make forward progress towards freeing physical
+	 * pages.
+	 *
+	 * If the pages are pinned for any other reason than being bound
+	 * to the GPU, simply unbinding from the GPU is not going to succeed
+	 * in releasing our pin count on the pages themselves.
+	 */
+	if (obj->pages_pin_count != num_vma_bound(obj))
+		return false;
+
+	/* We can only return physical pages if we either discard them
+	 * (because the user has marked them as being purgeable) or if
+	 * we can move their contents out to swap.
+	 */
+	return total_swap_pages || obj->madv == I915_MADV_DONTNEED;
+}
+
 /**
  * i915_gem_shrink - Shrink buffer object caches
  * @dev_priv: i915 device
@@ -129,6 +164,9 @@ i915_gem_shrink(struct drm_i915_private *dev_priv,
 			if ((flags & I915_SHRINK_ACTIVE) == 0 && obj->active)
 				continue;
 
+			if (!can_release_pages(obj))
+				continue;
+
 			drm_gem_object_reference(&obj->base);
 
 			/* For the unbound phase, this should be a no-op! */
@@ -188,21 +226,6 @@ static bool i915_gem_shrinker_lock(struct drm_device *dev, bool *unlock)
 	return true;
 }
 
-static int num_vma_bound(struct drm_i915_gem_object *obj)
-{
-	struct i915_vma *vma;
-	int count = 0;
-
-	list_for_each_entry(vma, &obj->vma_list, vma_link) {
-		if (drm_mm_node_allocated(&vma->node))
-			count++;
-		if (vma->pin_count)
-			count++;
-	}
-
-	return count;
-}
-
 static unsigned long
 i915_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
 {
@@ -222,7 +245,7 @@ i915_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
 		count += obj->base.size >> PAGE_SHIFT;
 
 	list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) {
-		if (!obj->active && obj->pages_pin_count == num_vma_bound(obj))
+		if (!obj->active && can_release_pages(obj))
 			count += obj->base.size >> PAGE_SHIFT;
 	}