From patchwork Mon Dec 17 09:04:01 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kuo-Hsin Yang
X-Patchwork-Id: 10732969
From: Kuo-Hsin Yang
To: dri-devel@lists.freedesktop.org
Subject: [PATCH v2] drm/gem: Mark pinned pages as unevictable
Date: Mon, 17 Dec 2018 17:04:01 +0800
Message-Id: <20181217090401.106078-1-vovoy@chromium.org>
X-Mailer: git-send-email 2.20.0.405.gbc1bbc6f85-goog
In-Reply-To: <20181214084135.48865-1-vovoy@chromium.org>
References: <20181214084135.48865-1-vovoy@chromium.org>
List-Id: Direct Rendering Infrastructure - Development
Cc: Kuo-Hsin Yang

The gem drivers use shmemfs to allocate backing storage for gem objects. On
the Samsung Chromebook Plus, the drm/rockchip driver may call
rockchip_gem_get_pages -> drm_gem_get_pages -> shmem_read_mapping_page to
pin a large number of pages, breaking the page reclaim mechanism and
triggering the oom-killer.

E.g. when the size of a zone is 3.9 GiB, the inactive_ratio is 5. If
active_anon / inactive_anon < 5 and all pages in the inactive_anon lru are
pinned, page reclaim keeps scanning the inactive_anon lru without reclaiming
any memory. The rockchip driver can therefore break page reclaim by pinning
only about 1/6 of the anon lru pages.

Mark these pinned pages as unevictable to avoid the premature oom-killer
invocation. See also the similar patch for the i915 driver [1].

[1]: https://patchwork.freedesktop.org/patch/msgid/20181106132324.17390-1-chris@chris-wilson.co.uk

Signed-off-by: Kuo-Hsin Yang
Cc: Daniel Vetter
Cc: Chris Wilson
Reviewed-by: Chris Wilson
---
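Notes: the "breaks page reclaim" behaviour described above comes from the
inactive_list_is_low() heuristic in mm/vmscan.c. Below is a minimal sketch of
that ratio computation, assuming the v4.20-era kernel this patch targets;
inactive_ratio_for() is an illustrative wrapper, not a kernel symbol.

/*
 * Sketch of the inactive_ratio computation performed by
 * inactive_list_is_low() in mm/vmscan.c (circa v4.20). Only the
 * formula is taken from the kernel; the wrapper is illustrative.
 */
static unsigned long inactive_ratio_for(unsigned long lru_pages)
{
	/* lru size in whole GiB, rounded down */
	unsigned long gb = lru_pages >> (30 - PAGE_SHIFT);

	return gb ? int_sqrt(10 * gb) : 1;
}

For the 3.9 GiB zone above, gb = 3 and int_sqrt(30) = 5. While active_anon is
less than 5x inactive_anon, the inactive list is not considered low, so
reclaim keeps scanning inactive_anon; if every page there is pinned, nothing
is ever freed.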
 drivers/gpu/drm/drm_gem.c | 36 +++++++++++++++++++++++++++++++++---
 1 file changed, 33 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 8b55ece97967f..2896ff60552f5 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -37,6 +37,7 @@
 #include <linux/shmem_fs.h>
 #include <linux/dma-buf.h>
 #include <linux/mem_encrypt.h>
+#include <linux/pagevec.h>
 #include <drm/drmP.h>
 #include <drm/drm_vma_manager.h>
 #include <drm/drm_gem.h>
@@ -526,6 +527,17 @@ int drm_gem_create_mmap_offset(struct drm_gem_object *obj)
 }
 EXPORT_SYMBOL(drm_gem_create_mmap_offset);
 
+/*
+ * Move pages to appropriate lru and release the pagevec, decrementing the
+ * ref count of those pages.
+ */
+static void drm_gem_check_release_pagevec(struct pagevec *pvec)
+{
+	check_move_unevictable_pages(pvec);
+	__pagevec_release(pvec);
+	cond_resched();
+}
+
 /**
  * drm_gem_get_pages - helper to allocate backing pages for a GEM object
  * from shmem
@@ -551,6 +563,7 @@ struct page **drm_gem_get_pages(struct drm_gem_object *obj)
 {
 	struct address_space *mapping;
 	struct page *p, **pages;
+	struct pagevec pvec;
 	int i, npages;
 
 	/* This is the shared memory object that backs the GEM resource */
@@ -568,6 +581,8 @@ struct page **drm_gem_get_pages(struct drm_gem_object *obj)
 	if (pages == NULL)
 		return ERR_PTR(-ENOMEM);
 
+	mapping_set_unevictable(mapping);
+
 	for (i = 0; i < npages; i++) {
 		p = shmem_read_mapping_page(mapping, i);
 		if (IS_ERR(p))
@@ -586,8 +601,14 @@ struct page **drm_gem_get_pages(struct drm_gem_object *obj)
 	return pages;
 
 fail:
-	while (i--)
-		put_page(pages[i]);
+	mapping_clear_unevictable(mapping);
+	pagevec_init(&pvec);
+	while (i--) {
+		if (!pagevec_add(&pvec, pages[i]))
+			drm_gem_check_release_pagevec(&pvec);
+	}
+	if (pagevec_count(&pvec))
+		drm_gem_check_release_pagevec(&pvec);
 
 	kvfree(pages);
 	return ERR_CAST(p);
@@ -605,6 +626,11 @@ void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
 		bool dirty, bool accessed)
 {
 	int i, npages;
+	struct address_space *mapping;
+	struct pagevec pvec;
+
+	mapping = file_inode(obj->filp)->i_mapping;
+	mapping_clear_unevictable(mapping);
 
 	/* We already BUG_ON() for non-page-aligned sizes in
 	 * drm_gem_object_init(), so we should never hit this unless
@@ -614,6 +640,7 @@ void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
 
 	npages = obj->size >> PAGE_SHIFT;
 
+	pagevec_init(&pvec);
 	for (i = 0; i < npages; i++) {
 		if (dirty)
 			set_page_dirty(pages[i]);
@@ -622,8 +649,11 @@ void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
 			mark_page_accessed(pages[i]);
 
 		/* Undo the reference we took when populating the table */
-		put_page(pages[i]);
+		if (!pagevec_add(&pvec, pages[i]))
+			drm_gem_check_release_pagevec(&pvec);
 	}
+	if (pagevec_count(&pvec))
+		drm_gem_check_release_pagevec(&pvec);
 
 	kvfree(pages);
 }
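
For context, a sketch of how a caller such as rockchip_gem_get_pages() is
expected to use these helpers after this patch; the error path and the
dirty/accessed flags below are illustrative, not taken from the rockchip
driver:

	struct page **pages;

	/* pins the backing pages and now also marks the mapping unevictable */
	pages = drm_gem_get_pages(obj);
	if (IS_ERR(pages))
		return PTR_ERR(pages);

	/* ... use the pages, e.g. build and map an sg table ... */

	/* clears the unevictable flag and batch-releases the page references */
	drm_gem_put_pages(obj, pages, true, false);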