From patchwork Tue Jul 7 11:46:44 2020
X-Patchwork-Submitter: Alex Shi
X-Patchwork-Id: 11648449
From: Alex Shi <alex.shi@linux.alibaba.com>
To: akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org,
 hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
 yang.shi@linux.alibaba.com, willy@infradead.org, hannes@cmpxchg.org,
 lkp@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 cgroups@vger.kernel.org, shakeelb@google.com, iamjoonsoo.kim@lge.com,
 richard.weiyang@gmail.com
Cc: Alex Shi <alex.shi@linux.alibaba.com>, Michal Hocko, Vladimir Davydov
Subject: [PATCH v15 12/21] mm/lru: introduce TestClearPageLRU
Date: Tue, 7 Jul 2020 19:46:44 +0800
Message-Id: <1594122412-28057-13-git-send-email-alex.shi@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1594122412-28057-1-git-send-email-alex.shi@linux.alibaba.com>
References: <1594122412-28057-1-git-send-email-alex.shi@linux.alibaba.com>

Combine the PageLRU check and ClearPageLRU into one function, the newly
introduced TestClearPageLRU. This function will be used as the page
isolation precondition, to prevent competing isolations elsewhere. As a
consequence, non-PageLRU pages may appear on an lru list, so the
corresponding BUG checks have to be removed.

Hugh Dickins pointed out that __page_cache_release() and release_pages()
do not need an atomic clear-bit, since nobody else can be using the page
at that moment. Likewise, there is no need to get_page() before clearing
the lru bit in isolate_lru_page(), since that function "(1) Must be
called with an elevated refcount on the page".

As Andrew Morton mentioned, this change dirties the cacheline even for
pages that aren't on the LRU. But that cost is acceptable according to
Rong Chen's report: https://lkml.org/lkml/2020/3/4/173

Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko
Cc: Vladimir Davydov
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org
Cc: cgroups@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/page-flags.h |  1 +
 mm/mlock.c                 |  3 +--
 mm/swap.c                  |  6 ++----
 mm/vmscan.c                | 26 +++++++++++---------------
 4 files changed, 15 insertions(+), 21 deletions(-)
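[Editorial note, not part of the patch: the fragment below condenses the
isolate_lru_page() change in the diff to show why the atomic helper
matters. Identifiers match the kernel API; surrounding locking and
bookkeeping are elided.]

Open-coded, the test and the clear are two separate operations, so two
competing isolators can both observe PageLRU() set before either one
clears it:

	if (PageLRU(page)) {			/* test ... */
		ClearPageLRU(page);		/* ... then clear: not atomic */
		del_page_from_lru_list(page, lruvec, page_lru(page));
	}

TestClearPageLRU() folds both steps into one atomic
test_and_clear_bit(), so exactly one competitor wins the lru bit and
only the winner goes on to touch the lru list:

	if (TestClearPageLRU(page))
		del_page_from_lru_list(page, lruvec, page_lru(page));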
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 6be1aa559b1e..9554ed1387dc 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -326,6 +326,7 @@ static inline void page_init_poison(struct page *page, size_t size)
 PAGEFLAG(Dirty, dirty, PF_HEAD) TESTSCFLAG(Dirty, dirty, PF_HEAD)
 	__CLEARPAGEFLAG(Dirty, dirty, PF_HEAD)
 PAGEFLAG(LRU, lru, PF_HEAD) __CLEARPAGEFLAG(LRU, lru, PF_HEAD)
+	TESTCLEARFLAG(LRU, lru, PF_HEAD)
 PAGEFLAG(Active, active, PF_HEAD) __CLEARPAGEFLAG(Active, active, PF_HEAD)
 	TESTCLEARFLAG(Active, active, PF_HEAD)
 PAGEFLAG(Workingset, workingset, PF_HEAD)
diff --git a/mm/mlock.c b/mm/mlock.c
index f8736136fad7..228ba5a8e0a5 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -108,13 +108,12 @@ void mlock_vma_page(struct page *page)
  */
 static bool __munlock_isolate_lru_page(struct page *page, bool getpage)
 {
-	if (PageLRU(page)) {
+	if (TestClearPageLRU(page)) {
 		struct lruvec *lruvec;
 
 		lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
 		if (getpage)
 			get_page(page);
-		ClearPageLRU(page);
 		del_page_from_lru_list(page, lruvec, page_lru(page));
 		return true;
 	}
diff --git a/mm/swap.c b/mm/swap.c
index f645965fde0e..5092fe9c8c47 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -83,10 +83,9 @@ static void __page_cache_release(struct page *page)
 		struct lruvec *lruvec;
 		unsigned long flags;
 
+		__ClearPageLRU(page);
 		spin_lock_irqsave(&pgdat->lru_lock, flags);
 		lruvec = mem_cgroup_page_lruvec(page, pgdat);
-		VM_BUG_ON_PAGE(!PageLRU(page), page);
-		__ClearPageLRU(page);
 		del_page_from_lru_list(page, lruvec, page_off_lru(page));
 		spin_unlock_irqrestore(&pgdat->lru_lock, flags);
 	}
@@ -878,9 +877,8 @@ void release_pages(struct page **pages, int nr)
 				spin_lock_irqsave(&locked_pgdat->lru_lock, flags);
 			}
 
-			lruvec = mem_cgroup_page_lruvec(page, locked_pgdat);
-			VM_BUG_ON_PAGE(!PageLRU(page), page);
 			__ClearPageLRU(page);
+			lruvec = mem_cgroup_page_lruvec(page, locked_pgdat);
 			del_page_from_lru_list(page, lruvec, page_off_lru(page));
 		}
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c1c4259b4de5..18986fefd49b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1548,16 +1548,16 @@ int __isolate_lru_page(struct page *page, isolate_mode_t mode)
 {
 	int ret = -EINVAL;
 
-	/* Only take pages on the LRU. */
-	if (!PageLRU(page))
-		return ret;
-
 	/* Compaction should not handle unevictable pages but CMA can do so */
 	if (PageUnevictable(page) && !(mode & ISOLATE_UNEVICTABLE))
 		return ret;
 
 	ret = -EBUSY;
 
+	/* Only take pages on the LRU. */
+	if (!PageLRU(page))
+		return ret;
+
 	/*
 	 * To minimise LRU disruption, the caller can indicate that it only
 	 * wants to isolate pages it will be able to operate on without
@@ -1671,8 +1671,6 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 		page = lru_to_page(src);
 		prefetchw_prev_lru_page(page, src, flags);
 
-		VM_BUG_ON_PAGE(!PageLRU(page), page);
-
 		nr_pages = compound_nr(page);
 		total_scan += nr_pages;
 
@@ -1769,21 +1767,19 @@ int isolate_lru_page(struct page *page)
 	VM_BUG_ON_PAGE(!page_count(page), page);
 	WARN_RATELIMIT(PageTail(page), "trying to isolate tail page");
 
-	if (PageLRU(page)) {
+	if (TestClearPageLRU(page)) {
 		pg_data_t *pgdat = page_pgdat(page);
 		struct lruvec *lruvec;
+		int lru = page_lru(page);
 
-		spin_lock_irq(&pgdat->lru_lock);
+		get_page(page);
 		lruvec = mem_cgroup_page_lruvec(page, pgdat);
-		if (PageLRU(page)) {
-			int lru = page_lru(page);
-			get_page(page);
-			ClearPageLRU(page);
-			del_page_from_lru_list(page, lruvec, lru);
-			ret = 0;
-		}
+		spin_lock_irq(&pgdat->lru_lock);
+		del_page_from_lru_list(page, lruvec, lru);
 		spin_unlock_irq(&pgdat->lru_lock);
+		ret = 0;
 	}
+
 	return ret;
 }
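[Editorial note, not part of the patch: the TESTCLEARFLAG(LRU, lru,
PF_HEAD) line added above makes page-flags.h generate roughly the helper
below; this is a simplified sketch of the macro expansion, not literal
kernel source.]

	static __always_inline int TestClearPageLRU(struct page *page)
	{
		/*
		 * PF_HEAD redirects the operation to the compound head
		 * page; test_and_clear_bit() performs the test and the
		 * clear as a single atomic step, which is what the new
		 * isolation precondition relies on.
		 */
		return test_and_clear_bit(PG_lru, &PF_HEAD(page, 1)->flags);
	}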