From patchwork Wed Aug 19 04:27:30 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alexander Duyck
X-Patchwork-Id: 11722501
Subject: [RFC PATCH v2 4/5] mm: Split release_pages work into 3 passes
From: Alexander Duyck <alexander.duyck@gmail.com>
To: alex.shi@linux.alibaba.com
Cc: yang.shi@linux.alibaba.com, lkp@intel.com, rong.a.chen@intel.com,
 khlebnikov@yandex-team.ru, kirill@shutemov.name, hughd@google.com,
 linux-kernel@vger.kernel.org, alexander.duyck@gmail.com,
 daniel.m.jordan@oracle.com, linux-mm@kvack.org, shakeelb@google.com,
 willy@infradead.org, hannes@cmpxchg.org, tj@kernel.org,
 cgroups@vger.kernel.org, akpm@linux-foundation.org,
 richard.weiyang@gmail.com, mgorman@techsingularity.net,
 iamjoonsoo.kim@lge.com
Date: Tue, 18 Aug 2020 21:27:30 -0700
Message-ID: <20200819042730.23414.41309.stgit@localhost.localdomain>
In-Reply-To: <20200819041852.23414.95939.stgit@localhost.localdomain>
References: <20200819041852.23414.95939.stgit@localhost.localdomain>
User-Agent: StGit/0.17.1-dirty

From: Alexander Duyck <alexander.duyck@gmail.com>

The release_pages function has a number of paths that end up with the
LRU lock having to be released and reacquired. One such example is the
freeing of THP pages, which requires releasing the LRU lock so that it
can potentially be reacquired by __put_compound_page.

To avoid that, we can split the work into three passes. The first pass
walks the list without the LRU lock and separates the pages that are
not on the LRU, which can be freed immediately, from those that are.
The second pass then removes the LRU pages from their LRU lists in
batches as large as a pagevec can hold before releasing the LRU lock.
Once the pages have been removed from the LRU, the third pass frees the
remaining pages without having to worry about whether they are still on
an LRU. The general idea is to avoid bouncing the LRU lock from page to
page and to instead hold the lock for up to a full pagevec worth of
pages at a time.

Signed-off-by: Alexander Duyck <alexander.duyck@gmail.com>
---
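Note for reviewers (below the "---", so not part of the commit
message): the three-pass structure may be easier to see stripped of
the kernel details, so here is a minimal user-space C sketch of the
same batching idea. Everything in it is hypothetical and for
illustration only: a pthread mutex stands in for the LRU lock, a
fixed-size array stands in for the pagevec, and the item/fake-LRU
names are invented for this example.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define BATCH_MAX 15	/* stand-in for the pagevec capacity */

struct item {
	struct item *prev, *next;	/* linkage on the fake LRU list */
	bool on_lru;
};

static struct item lru = { &lru, &lru };	/* circular list head */
static pthread_mutex_t lru_lock = PTHREAD_MUTEX_INITIALIZER;

/* Pass 2: take the lock once and unlink an entire batch of items. */
static void release_lru_items(struct item **batch, int n)
{
	pthread_mutex_lock(&lru_lock);
	for (int i = n; i--;) {		/* walk backwards: warmest first */
		struct item *it = batch[i];

		it->prev->next = it->next;
		it->next->prev = it->prev;
		it->on_lru = false;
	}
	pthread_mutex_unlock(&lru_lock);

	/* Pass 3: the batch is off the list; free it without the lock. */
	for (int i = n; i--;)
		free(batch[i]);
}

void release_items(struct item **items, int nr)
{
	struct item *batch[BATCH_MAX];
	int count = 0;

	/* Pass 1: no lock held; free non-LRU items immediately. */
	for (int i = 0; i < nr; i++) {
		struct item *it = items[i];

		if (!it->on_lru) {
			free(it);
			continue;
		}

		/* Defer LRU items so one lock hold covers a full batch. */
		batch[count++] = it;
		if (count == BATCH_MAX) {
			release_lru_items(batch, count);
			count = 0;
		}
	}

	if (count)	/* flush the final partial batch */
		release_lru_items(batch, count);
}

int main(void)
{
	enum { N = 40 };
	struct item *items[N];

	for (int i = 0; i < N; i++) {
		struct item *it = calloc(1, sizeof(*it));

		items[i] = it;
		if (i % 3) {	/* put two thirds of the items on the LRU */
			it->on_lru = true;
			it->prev = lru.prev;
			it->next = &lru;
			lru.prev->next = it;
			lru.prev = it;
		}
	}

	release_items(items, N);
	printf("fake LRU empty again: %s\n",
	       lru.next == &lru ? "yes" : "no");
	return 0;
}

As in the patch itself, the point is that the lock is taken once per
batch of up to BATCH_MAX items rather than being dropped and retaken
item by item.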
 mm/swap.c |  109 +++++++++++++++++++++++++++++++++++++------------------------
 1 file changed, 67 insertions(+), 42 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index fe53449fa1b8..b405f81b2c60 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -795,6 +795,54 @@ void lru_add_drain_all(void)
 }
 #endif
 
+static void __release_page(struct page *page, struct list_head *pages_to_free)
+{
+	if (PageCompound(page)) {
+		__put_compound_page(page);
+	} else {
+		/* Clear Active bit in case of parallel mark_page_accessed */
+		__ClearPageActive(page);
+		__ClearPageWaiters(page);
+
+		list_add(&page->lru, pages_to_free);
+	}
+}
+
+static void __release_lru_pages(struct pagevec *pvec,
+				struct list_head *pages_to_free)
+{
+	struct lruvec *lruvec = NULL;
+	unsigned long flags = 0;
+	int i;
+
+	/*
+	 * The pagevec at this point should contain a set of pages with
+	 * their reference count at 0 and the LRU flag set. We will now
+	 * need to pull the pages from their LRU lists.
+	 *
+	 * We walk the list backwards here since that way we are starting at
+	 * the pages that should be warmest in the cache.
+	 */
+	for (i = pagevec_count(pvec); i--;) {
+		struct page *page = pvec->pages[i];
+
+		lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);
+		VM_BUG_ON_PAGE(!PageLRU(page), page);
+		__ClearPageLRU(page);
+		del_page_from_lru_list(page, lruvec, page_off_lru(page));
+	}
+
+	unlock_page_lruvec_irqrestore(lruvec, flags);
+
+	/*
+	 * A batch of pages are no longer on the LRU list. Go through and
+	 * start the final process of returning the deferred pages to their
+	 * appropriate freelists.
+	 */
+	for (i = pagevec_count(pvec); i--;)
+		__release_page(pvec->pages[i], pages_to_free);
+}
+
 /**
  * release_pages - batched put_page()
  * @pages: array of pages to release
@@ -806,32 +854,24 @@ void lru_add_drain_all(void)
 void release_pages(struct page **pages, int nr)
 {
 	int i;
+	struct pagevec pvec;
 	LIST_HEAD(pages_to_free);
-	struct lruvec *lruvec = NULL;
-	unsigned long flags;
-	unsigned int lock_batch;
 
+	pagevec_init(&pvec);
+
+	/*
+	 * We need to first walk through the list cleaning up the low hanging
+	 * fruit and clearing those pages that either cannot be freed or that
+	 * are non-LRU. We will store the LRU pages in a pagevec so that we
+	 * can get to them in the next pass.
+	 */
 	for (i = 0; i < nr; i++) {
 		struct page *page = pages[i];
 
-		/*
-		 * Make sure the IRQ-safe lock-holding time does not get
-		 * excessive with a continuous string of pages from the
-		 * same lruvec. The lock is held only if lruvec != NULL.
-		 */
-		if (lruvec && ++lock_batch == SWAP_CLUSTER_MAX) {
-			unlock_page_lruvec_irqrestore(lruvec, flags);
-			lruvec = NULL;
-		}
-
 		if (is_huge_zero_page(page))
 			continue;
 
 		if (is_zone_device_page(page)) {
-			if (lruvec) {
-				unlock_page_lruvec_irqrestore(lruvec, flags);
-				lruvec = NULL;
-			}
 			/*
 			 * ZONE_DEVICE pages that return 'false' from
 			 * put_devmap_managed_page() do not require special
@@ -848,36 +888,21 @@ void release_pages(struct page **pages, int nr)
 		if (!put_page_testzero(page))
 			continue;
 
-		if (PageCompound(page)) {
-			if (lruvec) {
-				unlock_page_lruvec_irqrestore(lruvec, flags);
-				lruvec = NULL;
-			}
-			__put_compound_page(page);
+		if (!PageLRU(page)) {
+			__release_page(page, &pages_to_free);
 			continue;
 		}
 
-		if (PageLRU(page)) {
-			struct lruvec *prev_lruvec = lruvec;
-
-			lruvec = relock_page_lruvec_irqsave(page, lruvec,
-								&flags);
-			if (prev_lruvec != lruvec)
-				lock_batch = 0;
-
-			VM_BUG_ON_PAGE(!PageLRU(page), page);
-			__ClearPageLRU(page);
-			del_page_from_lru_list(page, lruvec, page_off_lru(page));
+		/* record page so we can get it in the next pass */
+		if (!pagevec_add(&pvec, page)) {
+			__release_lru_pages(&pvec, &pages_to_free);
+			pagevec_reinit(&pvec);
 		}
-
-		/* Clear Active bit in case of parallel mark_page_accessed */
-		__ClearPageActive(page);
-		__ClearPageWaiters(page);
-
-		list_add(&page->lru, &pages_to_free);
 	}
-	if (lruvec)
-		unlock_page_lruvec_irqrestore(lruvec, flags);
+
+	/* flush any remaining LRU pages that need to be processed */
+	if (pagevec_count(&pvec))
+		__release_lru_pages(&pvec, &pages_to_free);
 
 	mem_cgroup_uncharge_list(&pages_to_free);
 	free_unref_page_list(&pages_to_free);