From patchwork Fri Sep 18 03:00:45 2020
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 11783929
Date: Thu, 17 Sep 2020 21:00:45 -0600
In-Reply-To: <20200918030051.650890-1-yuzhao@google.com>
Message-Id: <20200918030051.650890-8-yuzhao@google.com>
References: <20200918030051.650890-1-yuzhao@google.com>
Subject: [PATCH 07/13] mm: don't pass enum lru_list to del_page_from_lru_list()
From: Yu Zhao <yuzhao@google.com>
To: Andrew Morton, Michal Hocko
Cc: Alex Shi, Steven Rostedt, Ingo Molnar, Johannes Weiner,
 Vladimir Davydov, Roman Gushchin, Shakeel Butt, Chris Down,
 Yafang Shao, Vlastimil Babka, Huang Ying, Pankaj Gupta,
 Matthew Wilcox, Konstantin Khlebnikov, Minchan Kim, Jaewon Kim,
 cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org

The enum lru_list parameter is redundant in the sense that it can be
extracted from the struct page parameter by page_lru(). To do this, we
need to make sure PageActive() and PageUnevictable() are still
correctly set or cleared at the time del_page_from_lru_list() is
called.

In check_move_unevictable_pages(), we currently have:

	ClearPageUnevictable()
	del_page_from_lru_list(lru_list = LRU_UNEVICTABLE)

and we need to reorder the two calls so that page_lru() still returns
LRU_UNEVICTABLE:

	del_page_from_lru_list()
		page_lru()
	ClearPageUnevictable()

We also need to deal with the deletions on the releasing paths, which
clear PageLRU() and PageActive()/PageUnevictable() via page_off_lru():

	del_page_from_lru_list(lru_list = page_off_lru())

This is handled by a similar reordering:

	del_page_from_lru_list()
		page_lru()
	page_off_lru()

In both cases, the reordering has no side effects.
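To make the required ordering concrete, here is a minimal sketch of a
releasing path after this patch. It only uses helpers the patch
touches and is an illustration, not additional code in the diff below:

	/*
	 * del_page_from_lru_list() now calls page_lru(), which reads
	 * PageActive()/PageUnevictable(), so the flags may only be
	 * cleared after the deletion.
	 */
	VM_BUG_ON_PAGE(!PageLRU(page), page);
	del_page_from_lru_list(page, lruvec);	/* page_lru() sees the old flags */
	page_off_lru(page);			/* now the flags can be cleared */

and likewise for the check_move_unevictable_pages() case:

	del_page_from_lru_list(page, lruvec);	/* page_lru() == LRU_UNEVICTABLE */
	ClearPageUnevictable(page);
	add_page_to_lru_list(page, lruvec);	/* back on an evictable list */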
Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 include/linux/mm_inline.h |  5 +++--
 mm/compaction.c           |  2 +-
 mm/mlock.c                |  2 +-
 mm/swap.c                 | 26 ++++++++++----------------
 mm/vmscan.c               |  8 ++++----
 5 files changed, 19 insertions(+), 24 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 199ff51bf2a0..03796021f0fe 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -125,9 +125,10 @@ static __always_inline void add_page_to_lru_list_tail(struct page *page,
 }
 
 static __always_inline void del_page_from_lru_list(struct page *page,
-				struct lruvec *lruvec, enum lru_list lru)
+				struct lruvec *lruvec)
 {
 	list_del(&page->lru);
-	update_lru_size(lruvec, lru, page_zonenum(page), -thp_nr_pages(page));
+	update_lru_size(lruvec, page_lru(page), page_zonenum(page),
+			-thp_nr_pages(page));
 }
 #endif
diff --git a/mm/compaction.c b/mm/compaction.c
index 176dcded298e..ec4af21d2867 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1006,7 +1006,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			low_pfn += compound_nr(page) - 1;
 
 		/* Successfully isolated */
-		del_page_from_lru_list(page, lruvec, page_lru(page));
+		del_page_from_lru_list(page, lruvec);
 		mod_node_page_state(page_pgdat(page),
 				NR_ISOLATED_ANON + page_is_file_lru(page),
 				thp_nr_pages(page));
diff --git a/mm/mlock.c b/mm/mlock.c
index 93ca2bf30b4f..647487912d0a 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -114,7 +114,7 @@ static bool __munlock_isolate_lru_page(struct page *page, bool getpage)
 		if (getpage)
 			get_page(page);
 		ClearPageLRU(page);
-		del_page_from_lru_list(page, lruvec, page_lru(page));
+		del_page_from_lru_list(page, lruvec);
 		return true;
 	}
 
diff --git a/mm/swap.c b/mm/swap.c
index 3c89a7276359..8bbeabc582c1 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -86,7 +86,8 @@ static void __page_cache_release(struct page *page)
 		spin_lock_irqsave(&pgdat->lru_lock, flags);
 		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 		VM_BUG_ON_PAGE(!PageLRU(page), page);
-		del_page_from_lru_list(page, lruvec, page_off_lru(page));
+		del_page_from_lru_list(page, lruvec);
+		page_off_lru(page);
 		spin_unlock_irqrestore(&pgdat->lru_lock, flags);
 	}
 }
@@ -236,7 +237,7 @@ static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec,
 	int *pgmoved = arg;
 
 	if (PageLRU(page) && !PageUnevictable(page)) {
-		del_page_from_lru_list(page, lruvec, page_lru(page));
+		del_page_from_lru_list(page, lruvec);
 		ClearPageActive(page);
 		add_page_to_lru_list_tail(page, lruvec);
 		(*pgmoved) += thp_nr_pages(page);
@@ -317,10 +318,9 @@ static void __activate_page(struct page *page, struct lruvec *lruvec,
 			    void *arg)
 {
 	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
-		int lru = page_lru_base_type(page);
 		int nr_pages = thp_nr_pages(page);
 
-		del_page_from_lru_list(page, lruvec, lru);
+		del_page_from_lru_list(page, lruvec);
 		SetPageActive(page);
 		add_page_to_lru_list(page, lruvec);
 		trace_mm_lru_activate(page);
@@ -527,8 +527,7 @@ void lru_cache_add_inactive_or_unevictable(struct page *page,
 static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
 				   void *arg)
 {
-	int lru;
-	bool active;
+	bool active = PageActive(page);
 	int nr_pages = thp_nr_pages(page);
 
 	if (!PageLRU(page))
 		return;
@@ -541,10 +540,7 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
 	if (page_mapped(page))
 		return;
 
-	active = PageActive(page);
-	lru = page_lru_base_type(page);
-
-	del_page_from_lru_list(page, lruvec, lru + active);
+	del_page_from_lru_list(page, lruvec);
 	ClearPageActive(page);
 	ClearPageReferenced(page);
 
@@ -576,10 +572,9 @@ static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec,
 			    void *arg)
 {
 	if (PageLRU(page) && PageActive(page) && !PageUnevictable(page)) {
-		int lru = page_lru_base_type(page);
 		int nr_pages = thp_nr_pages(page);
 
-		del_page_from_lru_list(page, lruvec, lru + LRU_ACTIVE);
+		del_page_from_lru_list(page, lruvec);
 		ClearPageActive(page);
 		ClearPageReferenced(page);
 		add_page_to_lru_list(page, lruvec);
@@ -595,11 +590,9 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
 {
 	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
 	    !PageSwapCache(page) && !PageUnevictable(page)) {
-		bool active = PageActive(page);
 		int nr_pages = thp_nr_pages(page);
 
-		del_page_from_lru_list(page, lruvec,
-				       LRU_INACTIVE_ANON + active);
+		del_page_from_lru_list(page, lruvec);
 		ClearPageActive(page);
 		ClearPageReferenced(page);
 		/*
@@ -893,7 +886,8 @@ void release_pages(struct page **pages, int nr)
 
 			lruvec = mem_cgroup_page_lruvec(page, locked_pgdat);
 			VM_BUG_ON_PAGE(!PageLRU(page), page);
-			del_page_from_lru_list(page, lruvec, page_off_lru(page));
+			del_page_from_lru_list(page, lruvec);
+			page_off_lru(page);
 		}
 
 		list_add(&page->lru, &pages_to_free);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 895be9fb96ec..47a4e8ba150f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1770,10 +1770,9 @@ int isolate_lru_page(struct page *page)
 		spin_lock_irq(&pgdat->lru_lock);
 		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 		if (PageLRU(page)) {
-			int lru = page_lru(page);
 			get_page(page);
 			ClearPageLRU(page);
-			del_page_from_lru_list(page, lruvec, lru);
+			del_page_from_lru_list(page, lruvec);
 			ret = 0;
 		}
 		spin_unlock_irq(&pgdat->lru_lock);
@@ -1862,7 +1861,8 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 		add_page_to_lru_list(page, lruvec);
 
 		if (put_page_testzero(page)) {
-			del_page_from_lru_list(page, lruvec, page_off_lru(page));
+			del_page_from_lru_list(page, lruvec);
+			page_off_lru(page);
 
 			if (unlikely(PageCompound(page))) {
 				spin_unlock_irq(&pgdat->lru_lock);
@@ -4277,8 +4277,8 @@ void check_move_unevictable_pages(struct pagevec *pvec)
 
 		if (page_evictable(page)) {
 			VM_BUG_ON_PAGE(PageActive(page), page);
+			del_page_from_lru_list(page, lruvec);
 			ClearPageUnevictable(page);
-			del_page_from_lru_list(page, lruvec, LRU_UNEVICTABLE);
 			add_page_to_lru_list(page, lruvec);
 			pgrescued++;
 		}