From patchwork Thu Jan 20 13:10:20 2022
X-Patchwork-Submitter: alexs@kernel.org
X-Patchwork-Id: 12718614
From: alexs@kernel.org
Subject: [PATCH 1/5] mm: remove page_is_file_lru function
Date: Thu, 20 Jan 2022 21:10:20 +0800
Message-Id: <20220120131024.502877-2-alexs@kernel.org>
In-Reply-To: <20220120131024.502877-1-alexs@kernel.org>

From: Alex Shi
This function can be fully replaced by folio_is_file_lru(), so there is no
reason to keep a duplicate.

Signed-off-by: Alex Shi
Cc: Steven Rostedt
Cc: Ingo Molnar
Cc: Andrew Morton
Cc: Naoya Horiguchi
Cc: Yu Zhao
Cc: Arnd Bergmann
Cc: Vlastimil Babka
Cc: Mel Gorman
Cc: Johannes Weiner
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/mm_inline.h     |  5 -----
 include/trace/events/vmscan.h |  2 +-
 mm/compaction.c               |  2 +-
 mm/gup.c                      |  2 +-
 mm/khugepaged.c               |  4 ++--
 mm/memory-failure.c           |  2 +-
 mm/memory_hotplug.c           |  2 +-
 mm/mempolicy.c                |  2 +-
 mm/migrate.c                  | 14 +++++++-------
 mm/mprotect.c                 |  2 +-
 mm/vmscan.c                   | 13 +++++++------
 11 files changed, 23 insertions(+), 27 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index b725839dfe71..f0aa34b0f2c4 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -27,11 +27,6 @@ static inline int folio_is_file_lru(struct folio *folio)
 	return !folio_test_swapbacked(folio);
 }
 
-static inline int page_is_file_lru(struct page *page)
-{
-	return folio_is_file_lru(page_folio(page));
-}
-
 static __always_inline void update_lru_size(struct lruvec *lruvec,
 				enum lru_list lru, enum zone_type zid,
 				long nr_pages)
diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h
index ca2e9009a651..51a2b1766b05 100644
--- a/include/trace/events/vmscan.h
+++ b/include/trace/events/vmscan.h
@@ -341,7 +341,7 @@ TRACE_EVENT(mm_vmscan_writepage,
 	TP_fast_assign(
 		__entry->pfn = page_to_pfn(page);
 		__entry->reclaim_flags = trace_reclaim_flags(
-						page_is_file_lru(page));
+				folio_is_file_lru(page_folio(page)));
 	),
 
 	TP_printk("page=%p pfn=0x%lx flags=%s",
diff --git a/mm/compaction.c b/mm/compaction.c
index b4e94cda3019..12f2af6ac484 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1066,7 +1066,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		/* Successfully isolated */
 		del_page_from_lru_list(page, lruvec);
 		mod_node_page_state(page_pgdat(page),
-				NR_ISOLATED_ANON + page_is_file_lru(page),
+				NR_ISOLATED_ANON + folio_is_file_lru(page_folio(page)),
 				thp_nr_pages(page));
 
 isolate_success:
diff --git a/mm/gup.c b/mm/gup.c
index f4c7645ccf8f..f3fea1efc2e2 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1900,7 +1900,7 @@ static long check_and_migrate_movable_pages(unsigned long nr_pages,
 			list_add_tail(&head->lru, &movable_page_list);
 			mod_node_page_state(page_pgdat(head),
 					    NR_ISOLATED_ANON +
-					    page_is_file_lru(head),
+					    folio_is_file_lru(page_folio(head)),
 					    thp_nr_pages(head));
 		}
 	}
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 35f14d0a00a6..8caed4089242 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -561,7 +561,7 @@ void __khugepaged_exit(struct mm_struct *mm)
 static void release_pte_page(struct page *page)
 {
 	mod_node_page_state(page_pgdat(page),
-			NR_ISOLATED_ANON + page_is_file_lru(page),
+			NR_ISOLATED_ANON + folio_is_file_lru(page_folio(page)),
 			-compound_nr(page));
 	unlock_page(page);
 	putback_lru_page(page);
@@ -703,7 +703,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 			goto out;
 		}
 		mod_node_page_state(page_pgdat(page),
-				NR_ISOLATED_ANON + page_is_file_lru(page),
+				NR_ISOLATED_ANON + folio_is_file_lru(page_folio(page)),
 				compound_nr(page));
 		VM_BUG_ON_PAGE(!PageLocked(page), page);
 		VM_BUG_ON_PAGE(PageLRU(page), page);
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 14ae5c18e776..9405388ab852 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2113,7 +2113,7 @@ static bool isolate_page(struct page *page, struct list_head *pagelist)
 
 		if (isolated && lru)
 			inc_node_page_state(page, NR_ISOLATED_ANON +
-					    page_is_file_lru(page));
+					    folio_is_file_lru(page_folio(page)));
 
 	/*
 	 * If we succeed to isolate the page, we grabbed another refcount on
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 0139b77c51d5..94b0d14da0af 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1731,7 +1731,7 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 			list_add_tail(&page->lru, &source);
 			if (!__PageMovable(page))
 				inc_node_page_state(page, NR_ISOLATED_ANON +
-						    page_is_file_lru(page));
+						    folio_is_file_lru(page_folio(page)));
 
 		} else {
 			if (__ratelimit(&migrate_rs)) {
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index a86590b2507d..f5c3c86d7c31 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1032,7 +1032,7 @@ static int migrate_page_add(struct page *page, struct list_head *pagelist,
 		if (!isolate_lru_page(head)) {
 			list_add_tail(&head->lru, pagelist);
 			mod_node_page_state(page_pgdat(head),
-				NR_ISOLATED_ANON + page_is_file_lru(head),
+				NR_ISOLATED_ANON + folio_is_file_lru(page_folio(head)),
 				thp_nr_pages(head));
 		} else if (flags & MPOL_MF_STRICT) {
 			/*
diff --git a/mm/migrate.c b/mm/migrate.c
index c7da064b4781..bdd7425556db 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -164,7 +164,7 @@ void putback_movable_pages(struct list_head *l)
 			put_page(page);
 		} else {
 			mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON +
-					page_is_file_lru(page), -thp_nr_pages(page));
+					folio_is_file_lru(page_folio(page)), -thp_nr_pages(page));
 			putback_lru_page(page);
 		}
 	}
@@ -1129,7 +1129,7 @@ static int unmap_and_move(new_page_t get_new_page,
 		 */
 		if (likely(!__PageMovable(page)))
 			mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON +
-					page_is_file_lru(page), -thp_nr_pages(page));
+					folio_is_file_lru(page_folio(page)), -thp_nr_pages(page));
 
 	if (reason != MR_MEMORY_FAILURE)
 		/*
@@ -1657,7 +1657,7 @@ static int add_page_for_migration(struct mm_struct *mm, unsigned long addr,
 		err = 1;
 		list_add_tail(&head->lru, pagelist);
 		mod_node_page_state(page_pgdat(head),
-			NR_ISOLATED_ANON + page_is_file_lru(head),
+			NR_ISOLATED_ANON + folio_is_file_lru(page_folio(head)),
 			thp_nr_pages(head));
 	}
 out_putpage:
@@ -2048,7 +2048,7 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 	if (isolate_lru_page(page))
 		return 0;
 
-	page_lru = page_is_file_lru(page);
+	page_lru = folio_is_file_lru(page_folio(page));
 	mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON + page_lru,
 			    nr_pages);
 
@@ -2093,7 +2093,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 	 * Don't migrate file pages that are mapped in multiple processes
 	 * with execute permissions as they are probably shared libraries.
 	 */
-	if (page_mapcount(page) != 1 && page_is_file_lru(page) &&
+	if (page_mapcount(page) != 1 && folio_is_file_lru(page_folio(page)) &&
 	    (vma->vm_flags & VM_EXEC))
 		goto out;
 
@@ -2101,7 +2101,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 	 * Also do not migrate dirty pages as not all filesystems can move
 	 * dirty pages in MIGRATE_ASYNC mode which is a waste of cycles.
 	 */
-	if (page_is_file_lru(page) && PageDirty(page))
+	if (folio_is_file_lru(page_folio(page)) && PageDirty(page))
 		goto out;
 
 	isolated = numamigrate_isolate_page(pgdat, page);
@@ -2115,7 +2115,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 	if (!list_empty(&migratepages)) {
 		list_del(&page->lru);
 		mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON +
-				page_is_file_lru(page), -nr_pages);
+				folio_is_file_lru(page_folio(page)), -nr_pages);
 		putback_lru_page(page);
 	}
 	isolated = 0;
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 0138dfcdb1d8..31d1270deb4f 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -102,7 +102,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 				 * it cannot move them all from MIGRATE_ASYNC
 				 * context.
 				 */
-				if (page_is_file_lru(page) && PageDirty(page))
+				if (folio_is_file_lru(page_folio(page)) && PageDirty(page))
 					continue;
 
 				/*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 0dbfa3a69567..c361973774b4 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1311,7 +1311,7 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
 		 * exceptional entries and shadow exceptional entries in the
 		 * same address_space.
 		 */
-		if (reclaimed && page_is_file_lru(page) &&
+		if (reclaimed && folio_is_file_lru(page_folio(page)) &&
 		    !mapping_exiting(mapping) && !dax_mapping(mapping))
 			shadow = workingset_eviction(page, target_memcg);
 		__delete_from_page_cache(page, shadow);
@@ -1438,7 +1438,7 @@ static void page_check_dirty_writeback(struct page *page,
 	 * Anonymous pages are not handled by flushers and must be written
 	 * from reclaim context. Do not stall reclaim based on them
 	 */
-	if (!page_is_file_lru(page) ||
+	if (!folio_is_file_lru(page_folio(page)) ||
 	    (PageAnon(page) && !PageSwapBacked(page))) {
 		*dirty = false;
 		*writeback = false;
@@ -1777,7 +1777,7 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 			 * the rest of the LRU for clean pages and see
 			 * the same dirty pages again (PageReclaim).
 			 */
-			if (page_is_file_lru(page) &&
+			if (folio_is_file_lru(page_folio(page)) &&
 			    (!current_is_kswapd() || !PageReclaim(page) ||
 			     !test_bit(PGDAT_DIRTY, &pgdat->flags))) {
 				/*
@@ -1927,7 +1927,7 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 			try_to_free_swap(page);
 		VM_BUG_ON_PAGE(PageActive(page), page);
 		if (!PageMlocked(page)) {
-			int type = page_is_file_lru(page);
+			int type = folio_is_file_lru(page_folio(page));
 			SetPageActive(page);
 			stat->nr_activate[type] += nr_pages;
 			count_memcg_page_event(page, PGACTIVATE);
@@ -1976,7 +1976,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 	unsigned int noreclaim_flag;
 
 	list_for_each_entry_safe(page, next, page_list, lru) {
-		if (!PageHuge(page) && page_is_file_lru(page) &&
+		if (!PageHuge(page) && folio_is_file_lru(page_folio(page)) &&
 		    !PageDirty(page) && !__PageMovable(page) &&
 		    !PageUnevictable(page)) {
 			ClearPageActive(page);
@@ -2555,7 +2555,8 @@ static void shrink_active_list(unsigned long nr_to_scan,
 		 * IO, plus JVM can create lots of anon VM_EXEC pages,
 		 * so we ignore them here.
 		 */
-		if ((vm_flags & VM_EXEC) && page_is_file_lru(page)) {
+		if ((vm_flags & VM_EXEC) &&
+		    folio_is_file_lru(page_folio(page))) {
 			nr_rotated += thp_nr_pages(page);
 			list_add(&page->lru, &l_active);
 			continue;
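
[For readers following along outside the kernel tree, here is a minimal
userspace model of the transformation this patch applies at every call
site. The struct layouts and helpers below are simplified stand-ins for
illustration, not the kernel's real definitions.]

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-ins for the kernel types. */
struct folio {
        bool swapbacked;        /* models folio_test_swapbacked() */
};

struct page {
        struct folio *folio;    /* models the page-to-folio relationship */
};

/* Models the kernel's page_folio(): resolve a page to its owning folio. */
static struct folio *page_folio(struct page *page)
{
        return page->folio;
}

/* The helper the series keeps: file-backed means "not swap-backed". */
static int folio_is_file_lru(struct folio *folio)
{
        return !folio->swapbacked;
}

int main(void)
{
        struct folio file_folio = { .swapbacked = false };
        struct page page = { .folio = &file_folio };

        /* Before the patch a caller wrote page_is_file_lru(page);
         * after it, the caller spells out the conversion itself: */
        printf("is file lru: %d\n", folio_is_file_lru(page_folio(&page)));
        return 0;
}

[The removed page_is_file_lru(page) was nothing more than
folio_is_file_lru(page_folio(page)), so inlining the conversion at the
call sites drops a layer of indirection and makes each page-to-folio
boundary explicit.]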
From patchwork Thu Jan 20 13:10:21 2022
X-Patchwork-Submitter: alexs@kernel.org
X-Patchwork-Id: 12718615
From: alexs@kernel.org
Subject: [PATCH 2/5] mm: remove __clear_page_lru_flags()
Date: Thu, 20 Jan 2022 21:10:21 +0800
Message-Id: <20220120131024.502877-3-alexs@kernel.org>
In-Reply-To: <20220120131024.502877-1-alexs@kernel.org>

From: Alex Shi

The function can be fully replaced by __folio_clear_lru_flags(), so there
is no reason to keep a duplicate.

Signed-off-by: Alex Shi
Cc: Andrew Morton
Cc: Yu Zhao
Cc: Alex Shi
Cc: Vlastimil Babka
Cc: Arnd Bergmann
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/mm_inline.h | 5 -----
 mm/swap.c                 | 4 ++--
 mm/vmscan.c               | 2 +-
 3 files changed, 3 insertions(+), 8 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index f0aa34b0f2c4..c2384da888b4 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -59,11 +59,6 @@ static __always_inline void __folio_clear_lru_flags(struct folio *folio)
 	__folio_clear_unevictable(folio);
 }
 
-static __always_inline void __clear_page_lru_flags(struct page *page)
-{
-	__folio_clear_lru_flags(page_folio(page));
-}
-
 /**
  * folio_lru_list - Which LRU list should a folio be on?
  * @folio: The folio to test.
diff --git a/mm/swap.c b/mm/swap.c
index bcf3ac288b56..953cf8860542 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -86,7 +86,7 @@ static void __page_cache_release(struct page *page)
 		lruvec = folio_lruvec_lock_irqsave(folio, &flags);
 		del_page_from_lru_list(page, lruvec);
-		__clear_page_lru_flags(page);
+		__folio_clear_lru_flags(page_folio(page));
 		unlock_page_lruvec_irqrestore(lruvec, flags);
 	}
 	__ClearPageWaiters(page);
@@ -966,7 +966,7 @@ void release_pages(struct page **pages, int nr)
 				lock_batch = 0;
 
 			del_page_from_lru_list(page, lruvec);
-			__clear_page_lru_flags(page);
+			__folio_clear_lru_flags(page_folio(page));
 		}
 
 		__ClearPageWaiters(page);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c361973774b4..59a52ba8b52a 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2337,7 +2337,7 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec,
 			SetPageLRU(page);
 
 		if (unlikely(put_page_testzero(page))) {
-			__clear_page_lru_flags(page);
+			__folio_clear_lru_flags(page_folio(page));
 
 			if (unlikely(PageCompound(page))) {
 				spin_unlock_irq(&lruvec->lru_lock);
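
[As with the previous patch, the kept helper and the removed wrapper
differ only in argument type. A toy model of what the kept
__folio_clear_lru_flags() does; the real kernel clears the LRU-related
page flags through non-atomic __folio_clear_*() bit helpers on
folio->flags, and the flag bits below are stand-ins, not the real
PG_* values.]

#include <assert.h>

/* Toy flag bits standing in for PG_lru, PG_active and PG_unevictable. */
enum {
        LRU_FLAG         = 1UL << 0,
        ACTIVE_FLAG      = 1UL << 1,
        UNEVICTABLE_FLAG = 1UL << 2,
};

struct folio {
        unsigned long flags;
};

/* Models __folio_clear_lru_flags(): once the folio has left the LRU and
 * no other reference can observe it, drop the LRU-related flags
 * non-atomically. */
static void __folio_clear_lru_flags(struct folio *folio)
{
        folio->flags &= ~(LRU_FLAG | ACTIVE_FLAG | UNEVICTABLE_FLAG);
}

int main(void)
{
        struct folio folio = { .flags = LRU_FLAG | ACTIVE_FLAG };

        __folio_clear_lru_flags(&folio);
        assert(folio.flags == 0);
        return 0;
}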
From patchwork Thu Jan 20 13:10:22 2022
X-Patchwork-Submitter: alexs@kernel.org
X-Patchwork-Id: 12718617
From: alexs@kernel.org
Subject: [PATCH 3/5] mm: remove add_page_to_lru_list() function
Date: Thu, 20 Jan 2022 21:10:22 +0800
Message-Id: <20220120131024.502877-4-alexs@kernel.org>
In-Reply-To: <20220120131024.502877-1-alexs@kernel.org>

From: Alex Shi

The function can be fully replaced by lruvec_add_folio(), so there is no
reason to keep a duplicate.

Signed-off-by: Alex Shi
Cc: Andrew Morton
Cc: Yu Zhao
Cc: Alex Shi
Cc: Vlastimil Babka
Cc: Arnd Bergmann
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/mm_inline.h | 6 ------
 mm/swap.c                 | 6 +++---
 mm/vmscan.c               | 4 ++--
 3 files changed, 5 insertions(+), 11 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index c2384da888b4..7d7abd5ff73f 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -92,12 +92,6 @@ void lruvec_add_folio(struct lruvec *lruvec, struct folio *folio)
 	list_add(&folio->lru, &lruvec->lists[lru]);
 }
 
-static __always_inline void add_page_to_lru_list(struct page *page,
-				struct lruvec *lruvec)
-{
-	lruvec_add_folio(lruvec, page_folio(page));
-}
-
 static __always_inline
 void lruvec_add_folio_tail(struct lruvec *lruvec, struct folio *folio)
 {
diff --git a/mm/swap.c b/mm/swap.c
index 953cf8860542..fb101a06dce4 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -543,7 +543,7 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec)
 		 * It can make readahead confusing. But race window
 		 * is _really_ small and it's non-critical problem.
 		 */
-		add_page_to_lru_list(page, lruvec);
+		lruvec_add_folio(lruvec, page_folio(page));
 		SetPageReclaim(page);
 	} else {
 		/*
@@ -569,7 +569,7 @@ static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec)
 		del_page_from_lru_list(page, lruvec);
 		ClearPageActive(page);
 		ClearPageReferenced(page);
-		add_page_to_lru_list(page, lruvec);
+		lruvec_add_folio(lruvec, page_folio(page));
 
 		__count_vm_events(PGDEACTIVATE, nr_pages);
 		__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE,
@@ -592,7 +592,7 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec)
 		 * anonymous pages
 		 */
 		ClearPageSwapBacked(page);
-		add_page_to_lru_list(page, lruvec);
+		lruvec_add_folio(lruvec, page_folio(page));
 
 		__count_vm_events(PGLAZYFREE, nr_pages);
 		__count_memcg_events(lruvec_memcg(lruvec), PGLAZYFREE,
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 59a52ba8b52a..f09473c9ff35 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2354,7 +2354,7 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec,
 		 * inhibits memcg migration).
 		 */
 		VM_BUG_ON_PAGE(!folio_matches_lruvec(page_folio(page), lruvec), page);
-		add_page_to_lru_list(page, lruvec);
+		lruvec_add_folio(lruvec, page_folio(page));
 		nr_pages = thp_nr_pages(page);
 		nr_moved += nr_pages;
 		if (PageActive(page))
@@ -4875,7 +4875,7 @@ void check_move_unevictable_pages(struct pagevec *pvec)
 		if (page_evictable(page) && PageUnevictable(page)) {
 			del_page_from_lru_list(page, lruvec);
 			ClearPageUnevictable(page);
-			add_page_to_lru_list(page, lruvec);
+			lruvec_add_folio(lruvec, page_folio(page));
 			pgrescued += nr_pages;
 		}
 		SetPageLRU(page);
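
[A sketch of the add-side pattern, modelled in plain C. The real
lruvec_add_folio() also calls update_lru_size() to adjust per-node and
per-memcg counters, which this model omits; list_add() here mirrors
include/linux/list.h semantics, and the two-list enum is a simplified
stand-in for the kernel's five LRU lists.]

#include <stdbool.h>
#include <stdio.h>

/* Two of the kernel's LRU lists, enough for the sketch. */
enum lru_list { LRU_INACTIVE_FILE, LRU_ACTIVE_FILE, NR_LRU_LISTS };

struct list_node {
        struct list_node *prev, *next;
};

struct folio {
        struct list_node lru;
        bool active;
};

struct lruvec {
        struct list_node lists[NR_LRU_LISTS];
};

static void list_init(struct list_node *head)
{
        head->prev = head->next = head;
}

/* Models list_add(): insert right after the head (the "hot" end). */
static void list_add(struct list_node *new, struct list_node *head)
{
        new->next = head->next;
        new->prev = head;
        head->next->prev = new;
        head->next = new;
}

/* Models folio_lru_list(): pick the list this folio belongs on. */
static enum lru_list folio_lru_list(struct folio *folio)
{
        return folio->active ? LRU_ACTIVE_FILE : LRU_INACTIVE_FILE;
}

/* Models lruvec_add_folio(): route the folio onto its LRU list. */
static void lruvec_add_folio(struct lruvec *lruvec, struct folio *folio)
{
        list_add(&folio->lru, &lruvec->lists[folio_lru_list(folio)]);
}

int main(void)
{
        struct lruvec lruvec;
        struct folio folio = { .active = false };

        for (int i = 0; i < NR_LRU_LISTS; i++)
                list_init(&lruvec.lists[i]);

        lruvec_add_folio(&lruvec, &folio);
        printf("on inactive list: %d\n",
               lruvec.lists[LRU_INACTIVE_FILE].next == &folio.lru);
        return 0;
}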
From patchwork Thu Jan 20 13:10:23 2022
X-Patchwork-Submitter: alexs@kernel.org
X-Patchwork-Id: 12718616
From: alexs@kernel.org
Subject: [PATCH 4/5] mm: remove add_page_to_lru_list_tail()
Date: Thu, 20 Jan 2022 21:10:23 +0800
Message-Id: <20220120131024.502877-5-alexs@kernel.org>
In-Reply-To: <20220120131024.502877-1-alexs@kernel.org>

From: Alex Shi

The function can be fully replaced by lruvec_add_folio_tail(), so there
is no reason to keep a duplicate.

Signed-off-by: Alex Shi
Cc: Andrew Morton
Cc: Yu Zhao
Cc: Alex Shi
Cc: Vlastimil Babka
Cc: Arnd Bergmann
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/mm_inline.h | 6 ------
 mm/swap.c                 | 2 +-
 2 files changed, 1 insertion(+), 7 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 7d7abd5ff73f..4df5b39cc97b 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -102,12 +102,6 @@ void lruvec_add_folio_tail(struct lruvec *lruvec, struct folio *folio)
 	list_add_tail(&folio->lru, &lruvec->lists[lru]);
 }
 
-static __always_inline void add_page_to_lru_list_tail(struct page *page,
-				struct lruvec *lruvec)
-{
-	lruvec_add_folio_tail(lruvec, page_folio(page));
-}
-
 static __always_inline
 void lruvec_del_folio(struct lruvec *lruvec, struct folio *folio)
 {
diff --git a/mm/swap.c b/mm/swap.c
index fb101a06dce4..23c0afb76be6 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -550,7 +550,7 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec)
 		 * The page's writeback ends up during pagevec
 		 * We move that page into tail of inactive.
 		 */
-		add_page_to_lru_list_tail(page, lruvec);
+		lruvec_add_folio_tail(lruvec, page_folio(page));
 		__count_vm_events(PGROTATED, nr_pages);
 	}
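
[The tail variant differs from lruvec_add_folio() only in which end of
the list the folio lands on, which decides how soon reclaim sees it
again: lru_deactivate_file_fn() above uses the tail so a page whose
writeback just finished becomes an early reclaim candidate. A
self-contained model of the two insertions, with list semantics
mirroring include/linux/list.h.]

#include <stdio.h>

struct list_node {
        struct list_node *prev, *next;
};

static void list_init(struct list_node *head)
{
        head->prev = head->next = head;
}

/* Insert just after head: first found when scanning forward. */
static void list_add(struct list_node *new, struct list_node *head)
{
        new->next = head->next;
        new->prev = head;
        head->next->prev = new;
        head->next = new;
}

/* Insert just before head: last found when scanning forward. */
static void list_add_tail(struct list_node *new, struct list_node *head)
{
        new->prev = head->prev;
        new->next = head;
        head->prev->next = new;
        head->prev = new;
}

int main(void)
{
        struct list_node head, a, b;

        list_init(&head);
        list_add(&a, &head);            /* lruvec_add_folio() end */
        list_add_tail(&b, &head);       /* lruvec_add_folio_tail() end */
        printf("front is a: %d, back is b: %d\n",
               head.next == &a, head.prev == &b);
        return 0;
}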
From patchwork Thu Jan 20 13:10:24 2022
X-Patchwork-Submitter: alexs@kernel.org
X-Patchwork-Id: 12718618
From: alexs@kernel.org
Subject: [PATCH 5/5] mm: remove del_page_from_lru_list()
Date: Thu, 20 Jan 2022 21:10:24 +0800
Message-Id: <20220120131024.502877-6-alexs@kernel.org>
In-Reply-To: <20220120131024.502877-1-alexs@kernel.org>

From: Alex Shi

The function can be fully replaced by lruvec_del_folio(), so there is no
reason to keep a duplicate.

Signed-off-by: Alex Shi
Cc: Andrew Morton
Cc: Yu Zhao
Cc: Alex Shi
Cc: Arnd Bergmann
Cc: Vlastimil Babka
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/mm_inline.h |  6 ------
 mm/compaction.c           |  2 +-
 mm/mlock.c                |  2 +-
 mm/swap.c                 | 10 +++++-----
 mm/vmscan.c               |  4 ++--
 5 files changed, 9 insertions(+), 15 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 4df5b39cc97b..a66c08079675 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -110,12 +110,6 @@ void lruvec_del_folio(struct lruvec *lruvec, struct folio *folio)
 			-folio_nr_pages(folio));
 }
 
-static __always_inline void del_page_from_lru_list(struct page *page,
-				struct lruvec *lruvec)
-{
-	lruvec_del_folio(lruvec, page_folio(page));
-}
-
 #ifdef CONFIG_ANON_VMA_NAME
 /*
  * mmap_lock should be read-locked when calling vma_anon_name() and while using
diff --git a/mm/compaction.c b/mm/compaction.c
index 12f2af6ac484..385e0bb7aad5 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1064,7 +1064,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			low_pfn += compound_nr(page) - 1;
 
 		/* Successfully isolated */
-		del_page_from_lru_list(page, lruvec);
+		lruvec_del_folio(lruvec, page_folio(page));
 		mod_node_page_state(page_pgdat(page),
 				NR_ISOLATED_ANON + folio_is_file_lru(page_folio(page)),
 				thp_nr_pages(page));
diff --git a/mm/mlock.c b/mm/mlock.c
index 8f584eddd305..6b64758b5d8c 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -280,7 +280,7 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
 			 */
 			if (TestClearPageLRU(page)) {
 				lruvec = folio_lruvec_relock_irq(folio, lruvec);
-				del_page_from_lru_list(page, lruvec);
+				lruvec_del_folio(lruvec, page_folio(page));
 				continue;
 			} else
 				__munlock_isolation_failed(page);
diff --git a/mm/swap.c b/mm/swap.c
index 23c0afb76be6..359821740e0f 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -85,7 +85,7 @@ static void __page_cache_release(struct page *page)
 		unsigned long flags;
 
 		lruvec = folio_lruvec_lock_irqsave(folio, &flags);
-		del_page_from_lru_list(page, lruvec);
+		lruvec_del_folio(lruvec, page_folio(page));
 		__folio_clear_lru_flags(page_folio(page));
 		unlock_page_lruvec_irqrestore(lruvec, flags);
 	}
@@ -533,7 +533,7 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec)
 	if (page_mapped(page))
 		return;
 
-	del_page_from_lru_list(page, lruvec);
+	lruvec_del_folio(lruvec, page_folio(page));
 	ClearPageActive(page);
 	ClearPageReferenced(page);
 
@@ -566,7 +566,7 @@ static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec)
 	if (PageActive(page) && !PageUnevictable(page)) {
 		int nr_pages = thp_nr_pages(page);
 
-		del_page_from_lru_list(page, lruvec);
+		lruvec_del_folio(lruvec, page_folio(page));
 		ClearPageActive(page);
 		ClearPageReferenced(page);
 		lruvec_add_folio(lruvec, page_folio(page));
@@ -583,7 +583,7 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec)
 	    !PageSwapCache(page) && !PageUnevictable(page)) {
 		int nr_pages = thp_nr_pages(page);
 
-		del_page_from_lru_list(page, lruvec);
+		lruvec_del_folio(lruvec, page_folio(page));
 		ClearPageActive(page);
 		ClearPageReferenced(page);
 		/*
@@ -965,7 +965,7 @@ void release_pages(struct page **pages, int nr)
 			if (prev_lruvec != lruvec)
 				lock_batch = 0;
 
-			del_page_from_lru_list(page, lruvec);
+			lruvec_del_folio(lruvec, page_folio(page));
 			__folio_clear_lru_flags(page_folio(page));
 		}
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index f09473c9ff35..8ab97eac284a 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2247,7 +2247,7 @@ int isolate_lru_page(struct page *page)
 
 		get_page(page);
 		lruvec = folio_lruvec_lock_irq(folio);
-		del_page_from_lru_list(page, lruvec);
+		lruvec_del_folio(lruvec, page_folio(page));
 		unlock_page_lruvec_irq(lruvec);
 		ret = 0;
 	}
@@ -4873,7 +4873,7 @@ void check_move_unevictable_pages(struct pagevec *pvec)
 		lruvec = folio_lruvec_relock_irq(folio, lruvec);
 		if (page_evictable(page) && PageUnevictable(page)) {
-			del_page_from_lru_list(page, lruvec);
+			lruvec_del_folio(lruvec, page_folio(page));
 			ClearPageUnevictable(page);
 			lruvec_add_folio(lruvec, page_folio(page));
 			pgrescued += nr_pages;
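
[Taken together, the five patches leave call sites doing
page_folio(page) explicitly, often several times in one function. A
natural follow-up, noted here as an observation rather than part of this
series, is to hoist the conversion into a local folio pointer, since
page_folio() on a tail page involves a real lookup of the compound head.
A hypothetical call site with toy types; nothing below is the kernel's
real code.]

#include <stdio.h>

/* Toy stand-ins again; see the sketch after patch 1. */
struct folio { int on_lru; };
struct page { struct folio *folio; };
struct lruvec { int nr_folios; };

static struct folio *page_folio(struct page *page)
{
        return page->folio;
}

static void lruvec_del_folio(struct lruvec *lruvec, struct folio *folio)
{
        folio->on_lru = 0;
        lruvec->nr_folios--;
}

static void __folio_clear_lru_flags(struct folio *folio)
{
        folio->on_lru = 0;
}

/* Instead of calling page_folio(page) once per helper, as the converted
 * call sites in this series do, a caller can hoist it into a local: */
static void release_one(struct page *page, struct lruvec *lruvec)
{
        struct folio *folio = page_folio(page);

        lruvec_del_folio(lruvec, folio);
        __folio_clear_lru_flags(folio);
}

int main(void)
{
        struct folio f = { .on_lru = 1 };
        struct page p = { .folio = &f };
        struct lruvec v = { .nr_folios = 1 };

        release_one(&p, &v);
        printf("remaining on lru: %d\n", v.nr_folios);
        return 0;
}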