From patchwork Fri Nov 3 07:29:02 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13444381
From: Kefeng Wang
To: Andrew Morton
CC: Matthew Wilcox, David Hildenbrand, Kefeng Wang
Subject: [PATCH 1/5] mm: huge_memory: use more folio api in __split_huge_page_tail()
Date: Fri, 3 Nov 2023 15:29:02 +0800
Message-ID: <20231103072906.2000381-2-wangkefeng.wang@huawei.com>
In-Reply-To: <20231103072906.2000381-1-wangkefeng.wang@huawei.com>
References: <20231103072906.2000381-1-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.27.0
X-Mailing-List: linux-fsdevel@vger.kernel.org

Use more folio APIs to save six compound_head() calls in
__split_huge_page_tail().

Signed-off-by: Kefeng Wang
---
 mm/huge_memory.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f31f02472396..34001ef9d029 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2507,13 +2507,13 @@ static void __split_huge_page_tail(struct folio *folio, int tail,
 	clear_compound_head(page_tail);
 
 	/* Finally unfreeze refcount. Additional reference from page cache. */
-	page_ref_unfreeze(page_tail, 1 + (!PageAnon(head) ||
-					  PageSwapCache(head)));
+	page_ref_unfreeze(page_tail, 1 + (!folio_test_anon(folio) ||
+					  folio_test_swapcache(folio)));
 
-	if (page_is_young(head))
-		set_page_young(page_tail);
-	if (page_is_idle(head))
-		set_page_idle(page_tail);
+	if (folio_test_young(folio))
+		folio_set_young(new_folio);
+	if (folio_test_idle(folio))
+		folio_set_idle(new_folio);
 
 	folio_xchg_last_cpupid(new_folio, folio_last_cpupid(folio));
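The six calls the changelog counts are hidden inside the legacy page-flag
tests: every PageFoo(page) resolves the head page before reading flags,
while the folio variants operate on a known head page. Below is a minimal
userspace model of that difference; the types, the flag bit, and the
_model suffixes are invented stand-ins, not the kernel's real definitions.

#include <stdbool.h>
#include <stdio.h>

/* Invented stand-ins for the kernel types; illustration only. */
struct page { unsigned long flags; struct page *head; };
struct folio { struct page page; };

static int lookups;	/* counts head-page resolutions */

/* A tail page points at its head page; a head page points at itself. */
static struct page *compound_head(struct page *page)
{
	lookups++;
	return page->head;
}

/* Legacy test: resolves the head page on every call. */
static bool PageAnon_model(struct page *page)
{
	return compound_head(page)->flags & 0x1;
}

/* Folio test: a folio is a head page by construction, no resolution. */
static bool folio_test_anon_model(struct folio *folio)
{
	return folio->page.flags & 0x1;
}

int main(void)
{
	struct page head = { .flags = 0x1 };
	struct page tail = { .head = &head };
	struct folio *folio;
	int i;

	head.head = &head;

	lookups = 0;
	for (i = 0; i < 6; i++)		/* legacy style: six tests */
		PageAnon_model(&tail);
	printf("legacy: %d head lookups\n", lookups);	/* prints 6 */

	lookups = 0;
	folio = (struct folio *)compound_head(&tail);	/* convert once */
	for (i = 0; i < 6; i++)		/* folio style: six tests */
		folio_test_anon_model(folio);
	printf("folio:  %d head lookup\n", lookups);	/* prints 1 */

	return 0;
}

In __split_huge_page_tail() the caller already holds a folio, so the
conversion cost is not even paid once there; the six lookups simply
disappear.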
From patchwork Fri Nov 3 07:29:03 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13444380
From: Kefeng Wang
To: Andrew Morton
CC: Matthew Wilcox, David Hildenbrand, Kefeng Wang
Subject: [PATCH 2/5] mm: task_mmu: use a folio in smaps_account()
Date: Fri, 3 Nov 2023 15:29:03 +0800
Message-ID: <20231103072906.2000381-3-wangkefeng.wang@huawei.com>
In-Reply-To: <20231103072906.2000381-1-wangkefeng.wang@huawei.com>
References: <20231103072906.2000381-1-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.27.0
X-Mailing-List: linux-fsdevel@vger.kernel.org

Replace seven implicit calls to compound_head() with one page_folio().

Signed-off-by: Kefeng Wang
---
 fs/proc/task_mmu.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index ef2eb12906da..5ec06fee1f14 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -445,23 +445,25 @@ static void smaps_account(struct mem_size_stats *mss, struct page *page,
 {
 	int i, nr = compound ? compound_nr(page) : 1;
 	unsigned long size = nr * PAGE_SIZE;
+	struct folio *folio = page_folio(page);
 
 	/*
 	 * First accumulate quantities that depend only on |size| and the type
 	 * of the compound page.
 	 */
-	if (PageAnon(page)) {
+	if (folio_test_anon(folio)) {
 		mss->anonymous += size;
-		if (!PageSwapBacked(page) && !dirty && !PageDirty(page))
+		if (!folio_test_swapbacked(folio) && !dirty &&
+		    !folio_test_dirty(folio))
 			mss->lazyfree += size;
 	}
 
-	if (PageKsm(page))
+	if (folio_test_ksm(folio))
 		mss->ksm += size;
 
 	mss->resident += size;
 	/* Accumulate the size in pages that have been accessed. */
-	if (young || page_is_young(page) || PageReferenced(page))
+	if (young || folio_test_young(folio) || folio_test_referenced(folio))
 		mss->referenced += size;
 
 	/*
@@ -479,7 +481,7 @@ static void smaps_account(struct mem_size_stats *mss, struct page *page,
 	 * especially for migration entries. Treat regular migration entries
 	 * as mapcount == 1.
 	 */
-	if ((page_count(page) == 1) || migration) {
+	if ((folio_ref_count(folio) == 1) || migration) {
 		smaps_page_accumulate(mss, page, size, size << PSS_SHIFT,
 				      dirty, locked, true);
 		return;
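The seven implicit calls are hidden in the seven page-based tests the
diff replaces: PageAnon(), PageSwapBacked(), PageDirty(), PageKsm(),
page_is_young(), PageReferenced() and page_count(). Hoisting one
page_folio() to the top of smaps_account() resolves the head page once
and lets the type system carry that guarantee to every later test.
Conceptually it amounts to the sketch below; this is a simplification,
as the real page_folio() is a _Generic()-based macro in
include/linux/page-flags.h.

struct page;	/* opaque in this sketch */
struct folio;	/* by definition, a folio is a head page */

struct page *compound_head(struct page *page);	/* tail -> head */

/* Sketch of what page_folio() amounts to: one head-page resolution
 * plus a type change that later folio_test_*() calls rely on.
 */
static inline struct folio *page_folio_model(struct page *page)
{
	return (struct folio *)compound_head(page);
}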
From patchwork Fri Nov 3 07:29:04 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13444383
From: Kefeng Wang
To: Andrew Morton
CC: Matthew Wilcox, David Hildenbrand, Kefeng Wang
Subject: [PATCH 3/5] mm: task_mmu: use a folio in clear_refs_pte_range()
Date: Fri, 3 Nov 2023 15:29:04 +0800
Message-ID: <20231103072906.2000381-4-wangkefeng.wang@huawei.com>
In-Reply-To: <20231103072906.2000381-1-wangkefeng.wang@huawei.com>
References: <20231103072906.2000381-1-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.27.0
X-Mailing-List: linux-fsdevel@vger.kernel.org

Use a folio to save two compound_head() calls in clear_refs_pte_range().

Signed-off-by: Kefeng Wang
---
 fs/proc/task_mmu.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 5ec06fee1f14..869f6bb89230 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1161,7 +1161,7 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,
 	struct vm_area_struct *vma = walk->vma;
 	pte_t *pte, ptent;
 	spinlock_t *ptl;
-	struct page *page;
+	struct folio *folio;
 
 	ptl = pmd_trans_huge_lock(pmd, vma);
 	if (ptl) {
@@ -1173,12 +1173,12 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,
 		if (!pmd_present(*pmd))
 			goto out;
 
-		page = pmd_page(*pmd);
+		folio = page_folio(pmd_page(*pmd));
 
 		/* Clear accessed and referenced bits. */
 		pmdp_test_and_clear_young(vma, addr, pmd);
-		test_and_clear_page_young(page);
-		ClearPageReferenced(page);
+		folio_test_clear_young(folio);
+		folio_clear_referenced(folio);
 out:
 		spin_unlock(ptl);
 		return 0;
@@ -1200,14 +1200,14 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,
 		if (!pte_present(ptent))
 			continue;
 
-		page = vm_normal_page(vma, addr, ptent);
-		if (!page)
+		folio = vm_normal_folio(vma, addr, ptent);
+		if (!folio)
 			continue;
 
 		/* Clear accessed and referenced bits. */
 		ptep_test_and_clear_young(vma, addr, pte);
-		test_and_clear_page_young(page);
-		ClearPageReferenced(page);
+		folio_test_clear_young(folio);
+		folio_clear_referenced(folio);
 	}
 	pte_unmap_unlock(pte - 1, ptl);
 	cond_resched();
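The vm_normal_folio() used here is the folio-returning counterpart of
vm_normal_page(), which lets the per-PTE loop skip a separate
page_folio() step. Its definition in mm/memory.c is essentially the thin
wrapper below, reproduced from memory, so treat it as a sketch rather
than the authoritative source:

struct folio *vm_normal_folio(struct vm_area_struct *vma, unsigned long addr,
			      pte_t pte)
{
	struct page *page = vm_normal_page(vma, addr, pte);

	if (page)
		return page_folio(page);
	return NULL;
}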
From patchwork Fri Nov 3 07:29:05 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13444382
From: Kefeng Wang
To: Andrew Morton
CC: Matthew Wilcox, David Hildenbrand, Kefeng Wang
Subject: [PATCH 4/5] fs/proc/page: use a folio in stable_page_flags()
Date: Fri, 3 Nov 2023 15:29:05 +0800
Message-ID: <20231103072906.2000381-5-wangkefeng.wang@huawei.com>
In-Reply-To: <20231103072906.2000381-1-wangkefeng.wang@huawei.com>
References: <20231103072906.2000381-1-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.27.0
X-Mailing-List: linux-fsdevel@vger.kernel.org

Replace ten compound_head() calls with one page_folio().

Signed-off-by: Kefeng Wang
---
 fs/proc/page.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/fs/proc/page.c b/fs/proc/page.c
index 195b077c0fac..94ab0ba13b16 100644
--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -109,6 +109,7 @@ static inline u64 kpf_copy_bit(u64 kflags, int ubit, int kbit)
 
 u64 stable_page_flags(struct page *page)
 {
+	struct folio *folio;
 	u64 k;
 	u64 u;
 
@@ -119,6 +120,7 @@ u64 stable_page_flags(struct page *page)
 	if (!page)
 		return 1 << KPF_NOPAGE;
 
+	folio = page_folio(page);
 	k = page->flags;
 	u = 0;
 
@@ -128,11 +130,11 @@ u64 stable_page_flags(struct page *page)
 	 * Note that page->_mapcount is overloaded in SLAB, so the
 	 * simple test in page_mapped() is not enough.
 	 */
-	if (!PageSlab(page) && page_mapped(page))
+	if (!folio_test_slab(folio) && folio_mapped(folio))
 		u |= 1 << KPF_MMAP;
-	if (PageAnon(page))
+	if (folio_test_anon(folio))
 		u |= 1 << KPF_ANON;
-	if (PageKsm(page))
+	if (folio_test_ksm(folio))
 		u |= 1 << KPF_KSM;
 
 	/*
@@ -152,11 +154,9 @@ u64 stable_page_flags(struct page *page)
 	 * to make sure a given page is a thp, not a non-huge compound page.
 	 */
 	else if (PageTransCompound(page)) {
-		struct page *head = compound_head(page);
-
-		if (PageLRU(head) || PageAnon(head))
+		if (folio_test_lru(folio) || folio_test_anon(folio))
 			u |= 1 << KPF_THP;
-		else if (is_huge_zero_page(head)) {
+		else if (is_huge_zero_page(&folio->page)) {
 			u |= 1 << KPF_ZERO_PAGE;
 			u |= 1 << KPF_THP;
 		}
@@ -170,7 +170,7 @@ u64 stable_page_flags(struct page *page)
 	 */
 	if (PageBuddy(page))
 		u |= 1 << KPF_BUDDY;
-	else if (page_count(page) == 0 && is_free_buddy_page(page))
+	else if (folio_ref_count(folio) == 0 && is_free_buddy_page(page))
 		u |= 1 << KPF_BUDDY;
 	if (PageOffline(page))
 		u |= 1 << KPF_OFFLINE;
@@ -178,13 +178,13 @@ u64 stable_page_flags(struct page *page)
 	if (PageTable(page))
 		u |= 1 << KPF_PGTABLE;
 
-	if (page_is_idle(page))
+	if (folio_test_idle(folio))
 		u |= 1 << KPF_IDLE;
 
 	u |= kpf_copy_bit(k, KPF_LOCKED,	PG_locked);
 
 	u |= kpf_copy_bit(k, KPF_SLAB,		PG_slab);
-	if (PageTail(page) && PageSlab(page))
+	if (PageTail(page) && folio_test_slab(folio))
 		u |= 1 << KPF_SLAB;
 
 	u |= kpf_copy_bit(k, KPF_ERROR,		PG_error);
@@ -197,7 +197,7 @@ u64 stable_page_flags(struct page *page)
 	u |= kpf_copy_bit(k, KPF_ACTIVE,	PG_active);
 	u |= kpf_copy_bit(k, KPF_RECLAIM,	PG_reclaim);
 
-	if (PageSwapCache(page))
+	if (folio_test_swapcache(folio))
 		u |= 1 << KPF_SWAPCACHE;
 	u |= kpf_copy_bit(k, KPF_SWAPBACKED,	PG_swapbacked);
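stable_page_flags() translates kernel-internal PG_* bit positions into
the stable KPF_* positions exported through /proc/kpageflags, and the
kpf_copy_bit() helper visible in the hunk context does that relocation
one bit at a time. A standalone demo of the idiom follows; the helper
body matches the shape of the one in fs/proc/page.c, but the bit
positions here are made up for illustration, not the kernel's real
values.

#include <stdint.h>
#include <stdio.h>

/* Take bit 'kbit' of the kernel flags word and emit it at the
 * userspace-stable position 'ubit'.
 */
static inline uint64_t kpf_copy_bit(uint64_t kflags, int ubit, int kbit)
{
	return ((kflags >> kbit) & 1) << ubit;
}

int main(void)
{
	const int PG_locked = 0, KPF_LOCKED = 13;	/* illustrative only */
	uint64_t kflags = 1 << PG_locked;	/* kernel-side flags word */
	uint64_t u = kpf_copy_bit(kflags, KPF_LOCKED, PG_locked);

	printf("exported flags: %#llx\n", (unsigned long long)u); /* 0x2000 */
	return 0;
}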
From patchwork Fri Nov 3 07:29:06 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13444384
From: Kefeng Wang
To: Andrew Morton
CC: Matthew Wilcox, David Hildenbrand, Kefeng Wang
Subject: [PATCH 5/5] page_idle: kill page idle and young wrappers
Date: Fri, 3 Nov 2023 15:29:06 +0800
Message-ID: <20231103072906.2000381-6-wangkefeng.wang@huawei.com>
In-Reply-To: <20231103072906.2000381-1-wangkefeng.wang@huawei.com>
References: <20231103072906.2000381-1-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.27.0
X-Mailing-List: linux-fsdevel@vger.kernel.org
Since all callers of the page idle and young wrappers are gone, remove
the wrappers.

Signed-off-by: Kefeng Wang
---
 include/linux/page_idle.h | 25 -------------------------
 1 file changed, 25 deletions(-)

diff --git a/include/linux/page_idle.h b/include/linux/page_idle.h
index d8f344840643..1168d5f58ff2 100644
--- a/include/linux/page_idle.h
+++ b/include/linux/page_idle.h
@@ -119,29 +119,4 @@ static inline void folio_clear_idle(struct folio *folio)
 }
 #endif /* CONFIG_PAGE_IDLE_FLAG */
 
-
-static inline bool page_is_young(struct page *page)
-{
-	return folio_test_young(page_folio(page));
-}
-
-static inline void set_page_young(struct page *page)
-{
-	folio_set_young(page_folio(page));
-}
-
-static inline bool test_and_clear_page_young(struct page *page)
-{
-	return folio_test_clear_young(page_folio(page));
-}
-
-static inline bool page_is_idle(struct page *page)
-{
-	return folio_test_idle(page_folio(page));
-}
-
-static inline void set_page_idle(struct page *page)
-{
-	folio_set_idle(page_folio(page));
-}
 #endif /* _LINUX_MM_PAGE_IDLE_H */
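With the wrappers gone, any code that still holds only a struct page has
to spell out the conversion itself, which is exactly what the removed
wrappers did internally. A sketch of what a hypothetical leftover caller
would look like (page_idle_mark_idle is an invented name for
illustration; only page_folio() and folio_set_idle() are real kernel
APIs):

/* Hypothetical caller: before this patch it could have written
 * set_page_idle(page); afterwards it converts to a folio explicitly.
 */
static void page_idle_mark_idle(struct page *page)
{
	struct folio *folio = page_folio(page);

	folio_set_idle(folio);
}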