From patchwork Tue Jun 6 12:04:36 2017
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 9768741
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org, linux-mm@kvack.org
Date: Tue, 6 Jun 2017 13:04:36 +0100
Message-Id: <20170606120436.8683-1-chris@chris-wilson.co.uk>
X-Mailer: git-send-email 2.11.0
Cc: Michal Hocko, Dave Hansen, Matthew Auld, Andrew Morton,
 "Kirill A. Shutemov"
Subject: [Intel-gfx] [RFC] mm, drm/i915: Mark pinned shmemfs pages as unevictable

Similar in principle to the treatment of get_user_pages, pages that
i915.ko acquires from shmemfs are not immediately reclaimable, and so
should be excluded from mm accounting and vmscan until they have been
returned to the system via shrink_slab/i915_gem_shrink. By moving the
unreclaimable pages off the inactive anon lru, not only should vmscan
be improved by avoiding walking unreclaimable pages, but the system
should also have a better idea of how much memory it can reclaim at
that moment in time.

Note, however, the interaction with shrink_slab, which will move some
mlocked pages back to the inactive anon lru.

Suggested-by: Dave Hansen
Signed-off-by: Chris Wilson
Cc: Joonas Lahtinen
Cc: Matthew Auld
Cc: Dave Hansen
Cc: "Kirill A. Shutemov"
Cc: Andrew Morton
Cc: Michal Hocko
---
 drivers/gpu/drm/i915/i915_gem.c | 17 ++++++++++++++++-
 mm/mlock.c                      |  2 ++
 2 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 8cb811519db1..37a98fbc6a12 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2193,6 +2193,9 @@ void __i915_gem_object_truncate(struct drm_i915_gem_object *obj)
 	obj->mm.pages = ERR_PTR(-EFAULT);
 }
 
+extern void mlock_vma_page(struct page *page);
+extern unsigned int munlock_vma_page(struct page *page);
+
 static void
 i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj,
 			      struct sg_table *pages)
@@ -2214,6 +2217,10 @@ i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj,
 		if (obj->mm.madv == I915_MADV_WILLNEED)
 			mark_page_accessed(page);
 
+		lock_page(page);
+		munlock_vma_page(page);
+		unlock_page(page);
+
 		put_page(page);
 	}
 	obj->mm.dirty = false;
@@ -2412,6 +2419,10 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 		}
 		last_pfn = page_to_pfn(page);
 
+		lock_page(page);
+		mlock_vma_page(page);
+		unlock_page(page);
+
 		/* Check that the i965g/gm workaround works. */
 		WARN_ON((gfp & __GFP_DMA32) && (last_pfn >= 0x00100000UL));
 	}
@@ -2450,8 +2461,12 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 err_sg:
 	sg_mark_end(sg);
 err_pages:
-	for_each_sgt_page(page, sgt_iter, st)
+	for_each_sgt_page(page, sgt_iter, st) {
+		lock_page(page);
+		munlock_vma_page(page);
+		unlock_page(page);
 		put_page(page);
+	}
 	sg_free_table(st);
 	kfree(st);
 
diff --git a/mm/mlock.c b/mm/mlock.c
index b562b5523a65..531d9f8fd033 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -94,6 +94,7 @@ void mlock_vma_page(struct page *page)
 			putback_lru_page(page);
 	}
 }
+EXPORT_SYMBOL_GPL(mlock_vma_page);
 
 /*
  * Isolate a page from LRU with optional get_page() pin.
@@ -211,6 +212,7 @@ unsigned int munlock_vma_page(struct page *page)
 out:
 	return nr_pages - 1;
 }
+EXPORT_SYMBOL_GPL(munlock_vma_page);
 
 /*
  * convert get_user_pages() return value to posix mlock() error