From patchwork Fri Sep 18 03:00:46 2020
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 11783931
Date: Thu, 17 Sep 2020 21:00:46 -0600
In-Reply-To: <20200918030051.650890-1-yuzhao@google.com>
Message-Id: <20200918030051.650890-9-yuzhao@google.com>
References: <20200918030051.650890-1-yuzhao@google.com>
X-Mailer: git-send-email 2.28.0.681.g6f77f65b4e-goog
Subject: [PATCH 08/13] mm: rename page_off_lru() to __clear_page_lru_flags()
From: Yu Zhao
To: Andrew Morton, Michal Hocko
Cc: Alex Shi, Steven Rostedt, Ingo Molnar, Johannes Weiner,
    Vladimir Davydov, Roman Gushchin, Shakeel Butt, Chris Down,
    Yafang Shao, Vlastimil Babka, Huang Ying, Pankaj Gupta,
    Matthew Wilcox, Konstantin Khlebnikov, Minchan Kim, Jaewon Kim,
    cgroups@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Yu Zhao

Rename the function to reflect what it actually does, and make it
return void since the return value is no longer needed. If PageActive()
and PageUnevictable() are both set, refuse to clear either flag and
leave them for bad_page() to report.

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 include/linux/mm_inline.h | 29 ++++++++++-------------------
 mm/swap.c                 |  4 ++--
 mm/vmscan.c               |  2 +-
 3 files changed, 13 insertions(+), 22 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 03796021f0fe..ef3fd79222e5 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -61,28 +61,19 @@ static inline enum lru_list page_lru_base_type(struct page *page)
 }
 
 /**
- * page_off_lru - which LRU list was page on? clearing its lru flags.
- * @page: the page to test
- *
- * Returns the LRU list a page was on, as an index into the array of LRU
- * lists; and clears its Unevictable or Active flags, ready for freeing.
+ * __clear_page_lru_flags - clear page lru flags before releasing a page
+ * @page: the page that was on lru and now has a zero reference
  */
-static __always_inline enum lru_list page_off_lru(struct page *page)
+static __always_inline void __clear_page_lru_flags(struct page *page)
 {
-	enum lru_list lru;
-
 	__ClearPageLRU(page);
-	if (PageUnevictable(page)) {
-		__ClearPageUnevictable(page);
-		lru = LRU_UNEVICTABLE;
-	} else {
-		lru = page_lru_base_type(page);
-		if (PageActive(page)) {
-			__ClearPageActive(page);
-			lru += LRU_ACTIVE;
-		}
-	}
-	return lru;
+
+	/* this shouldn't happen, so leave the flags to bad_page() */
+	if (PageActive(page) && PageUnevictable(page))
+		return;
+
+	__ClearPageActive(page);
+	__ClearPageUnevictable(page);
 }
 
 /**
diff --git a/mm/swap.c b/mm/swap.c
index 8bbeabc582c1..b252f3593c57 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -87,7 +87,7 @@ static void __page_cache_release(struct page *page)
 		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 		VM_BUG_ON_PAGE(!PageLRU(page), page);
 		del_page_from_lru_list(page, lruvec);
-		page_off_lru(page);
+		__clear_page_lru_flags(page);
 		spin_unlock_irqrestore(&pgdat->lru_lock, flags);
 	}
 }
@@ -887,7 +887,7 @@ void release_pages(struct page **pages, int nr)
 			lruvec = mem_cgroup_page_lruvec(page, locked_pgdat);
 			VM_BUG_ON_PAGE(!PageLRU(page), page);
 			del_page_from_lru_list(page, lruvec);
-			page_off_lru(page);
+			__clear_page_lru_flags(page);
 		}
 
 		list_add(&page->lru, &pages_to_free);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 47a4e8ba150f..d93033407200 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1862,7 +1862,7 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 
 		if (put_page_testzero(page)) {
 			del_page_from_lru_list(page, lruvec);
-			page_off_lru(page);
+			__clear_page_lru_flags(page);
 
 			if (unlikely(PageCompound(page))) {
 				spin_unlock_irq(&pgdat->lru_lock);
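
[Editor's illustration, not part of the patch] A minimal standalone C sketch of
the new helper's semantics, using a plain bitmask in place of struct page flags.
The names fake_page, clear_lru_flags and the PG_* constants below are simplified
stand-ins chosen for this example only; the kernel code is the diff above.

/* sketch.c: model of __clear_page_lru_flags() semantics (illustrative only) */
#include <assert.h>
#include <stdio.h>

enum {
	PG_lru         = 1 << 0,
	PG_active      = 1 << 1,
	PG_unevictable = 1 << 2,
};

struct fake_page {
	unsigned long flags;
};

static void clear_lru_flags(struct fake_page *page)
{
	/* the LRU flag is always cleared, mirroring __ClearPageLRU() */
	page->flags &= ~(unsigned long)PG_lru;

	/* both set shouldn't happen; keep them so a bad_page()-style check can report it */
	if ((page->flags & PG_active) && (page->flags & PG_unevictable))
		return;

	page->flags &= ~(unsigned long)(PG_active | PG_unevictable);
}

int main(void)
{
	struct fake_page active = { .flags = PG_lru | PG_active };
	struct fake_page bad    = { .flags = PG_lru | PG_active | PG_unevictable };

	clear_lru_flags(&active);
	assert(active.flags == 0);	/* Active cleared along with LRU */

	clear_lru_flags(&bad);
	/* contradictory flags are left in place for a later sanity check */
	assert(bad.flags == (PG_active | PG_unevictable));

	printf("ok\n");
	return 0;
}

Note how no caller needs the old return value: the callers in the diff only paired
page_off_lru() with del_page_from_lru_list(), so the void signature drops dead code.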