
[v4,8/9] mm/ksm: Convert chain series funcs and replace get_ksm_page

Message ID 20240409092826.1733637-9-alexs@kernel.org (mailing list archive)
State New
Series transfer page to folio in KSM

Commit Message

alexs@kernel.org April 9, 2024, 9:28 a.m. UTC
From: "Alex Shi (tencent)" <alexs@kernel.org>

In the ksm stable tree, all pages are single (non-compound) pages, so let's
convert them to use folios, and convert the stable_tree_insert() and
stable_tree_search() functions as well.
Also replace get_ksm_page() with ksm_get_folio(), since the former is no
longer needed.

It could save a few compound_head calls.
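
As a rough illustration of where the saving comes from (a sketch only, not
part of the diff below): once a lookup hands back a folio, the reference
counting can stay on the folio, e.g.

	struct folio *folio;

	folio = ksm_get_folio(stable_node, GET_KSM_PAGE_NOLOCK);
	if (folio)
		folio_put(folio);	/* direct refcount drop, no compound_head() */

whereas the old get_ksm_page()/put_page() pair had to resolve the head page
again inside put_page().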

Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
Cc: Izik Eidus <izik.eidus@ravellosystems.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Chris Wright <chrisw@sous-sol.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
 mm/ksm.c     | 150 +++++++++++++++++++++++++--------------------------
 mm/migrate.c |   2 +-
 2 files changed, 73 insertions(+), 79 deletions(-)

Comments

David Hildenbrand April 9, 2024, 11:02 a.m. UTC | #1
On 09.04.24 11:28, alexs@kernel.org wrote:
> From: "Alex Shi (tencent)" <alexs@kernel.org>
> 
> In the ksm stable tree, all pages are single (non-compound) pages, so let's
> convert them to use folios, and convert the stable_tree_insert() and
> stable_tree_search() functions as well.
> Also replace get_ksm_page() with ksm_get_folio(), since the former is no
> longer needed.
> 
> It could save a few compound_head calls.
> 
> Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
> Cc: Izik Eidus <izik.eidus@ravellosystems.com>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Andrea Arcangeli <aarcange@redhat.com>
> Cc: Hugh Dickins <hughd@google.com>
> Cc: Chris Wright <chrisw@sous-sol.org>
> Reviewed-by: David Hildenbrand <david@redhat.com>

I don't recall giving that yet :)

You could have kept some get_ksm_page()->ksm_get_folio() into a separate 
patch.

i.e., "[PATCH v3 11/14] mm/ksm: remove get_ksm_page and related info" 
from your old series could have mostly stayed separately.

[...]

>   /*
> @@ -1829,7 +1821,7 @@ static __always_inline struct page *chain(struct ksm_stable_node **s_n_d,
>    * This function returns the stable tree node of identical content if found,
>    * NULL otherwise.
>    */
> -static struct page *stable_tree_search(struct page *page)
> +static void *stable_tree_search(struct page *page)

There is one caller of stable_tree_search() in cmp_and_merge_page().

Why the change from page* to void* ?

I suspect cmp_and_merge_page() could similarly be converted to some 
degree to let kpage be a folio (separate patch).
Alex Shi April 10, 2024, 3:47 a.m. UTC | #2
On 4/9/24 7:02 PM, David Hildenbrand wrote:
> On 09.04.24 11:28, alexs@kernel.org wrote:
>> From: "Alex Shi (tencent)" <alexs@kernel.org>
>>
>> In the ksm stable tree, all pages are single (non-compound) pages, so let's
>> convert them to use folios, and convert the stable_tree_insert() and
>> stable_tree_search() functions as well.
>> Also replace get_ksm_page() with ksm_get_folio(), since the former is no
>> longer needed.
>>
>> It could save a few compound_head calls.
>>
>> Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
>> Cc: Izik Eidus <izik.eidus@ravellosystems.com>
>> Cc: Matthew Wilcox <willy@infradead.org>
>> Cc: Andrea Arcangeli <aarcange@redhat.com>
>> Cc: Hugh Dickins <hughd@google.com>
>> Cc: Chris Wright <chrisw@sous-sol.org>
>> Reviewed-by: David Hildenbrand <david@redhat.com>
> 
> I don't recall giving that yet :)

Oops...
Sorry for the misunderstanding!

> 
> You could have kept some get_ksm_page()->ksm_get_folio() into a separate patch.
> 
> i.e., "[PATCH v3 11/14] mm/ksm: remove get_ksm_page and related info" from your old series could have mostly stayed separately.
> 

Yes, but the 11th and 12th patches kind of depend on each other; e.g. after merging the 8th, 9th, 10th and 12th patches with get_ksm_page replaced, we get:

../mm/ksm.c:993:21: error: ‘get_ksm_page’ defined but not used [-Werror=unused-function]
  993 | static struct page *get_ksm_page(struct ksm_stable_node *stable_node,
      |                     ^~~~~~~~~~~~

so we would have to squash the 11th and 12th if we want to merge the 12th with 8, 9 and 10...
or should we just merge 8, 9 and 10 and keep the 11th and 12th separate, as you first suggested?

> [...]
> 
>>   /*
>> @@ -1829,7 +1821,7 @@ static __always_inline struct page *chain(struct ksm_stable_node **s_n_d,
>>    * This function returns the stable tree node of identical content if found,
>>    * NULL otherwise.
>>    */
>> -static struct page *stable_tree_search(struct page *page)
>> +static void *stable_tree_search(struct page *page)
> 
> There is one caller of stable_tree_search() in cmp_and_merge_page().
> 
> Why the change from page* to void* ?

Uh, a few more changes would be needed if we want to remove the void*; something along these lines:

diff --git a/mm/ksm.c b/mm/ksm.c
index 0d703c3da9d8..cd414a9c33ad 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1815,7 +1815,7 @@ static __always_inline struct folio *chain(struct ksm_stable_node **s_n_d,
  * This function returns the stable tree node of identical content if found,
  * NULL otherwise.
  */
-static void *stable_tree_search(struct page *page)
+static struct folio *stable_tree_search(struct page *page)
 {
        int nid;
        struct rb_root *root;
@@ -2308,6 +2308,7 @@ static void cmp_and_merge_page(struct page *page, struct ksm_rmap_item *rmap_ite
        struct page *tree_page = NULL;
        struct ksm_stable_node *stable_node;
        struct page *kpage;
+       struct folio *folio;
        unsigned int checksum;
        int err;
        bool max_page_sharing_bypass = false;
@@ -2333,7 +2334,8 @@ static void cmp_and_merge_page(struct page *page, struct ksm_rmap_item *rmap_ite
        }
 
        /* We first start with searching the page inside the stable tree */
-       kpage = stable_tree_search(page);
+       folio = stable_tree_search(page);
+       kpage = &folio->page;
        if (kpage == page && rmap_item->head == stable_node) {
                put_page(kpage);
                return;

> I suspect cmp_and_merge_page() could similarly be converted to some degree to let kpage be a folio (separate patch).
>

Yes, a couple of changes are needed for cmp_and_merge_page() and the try_to_merge_xx_pages() series of functions; I am going to change them in the next patch series. I guess two rounds of patches are better for a big/huge conversion like this, is that right?

Thanks
Alex
David Hildenbrand April 10, 2024, 8:47 a.m. UTC | #3
On 10.04.24 05:47, Alex Shi wrote:
> 
> 
> On 4/9/24 7:02 PM, David Hildenbrand wrote:
>> On 09.04.24 11:28, alexs@kernel.org wrote:
>>> From: "Alex Shi (tencent)" <alexs@kernel.org>
>>>
>>> In the ksm stable tree, all pages are single (non-compound) pages, so let's
>>> convert them to use folios, and convert the stable_tree_insert() and
>>> stable_tree_search() functions as well.
>>> Also replace get_ksm_page() with ksm_get_folio(), since the former is no
>>> longer needed.
>>>
>>> It could save a few compound_head calls.
>>>
>>> Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
>>> Cc: Izik Eidus <izik.eidus@ravellosystems.com>
>>> Cc: Matthew Wilcox <willy@infradead.org>
>>> Cc: Andrea Arcangeli <aarcange@redhat.com>
>>> Cc: Hugh Dickins <hughd@google.com>
>>> Cc: Chris Wright <chrisw@sous-sol.org>
>>> Reviewed-by: David Hildenbrand <david@redhat.com>
>>
>> I don't recall giving that yet :)
> 
> Oops...
> Sorry for the misunderstanding!

No worries :)

>>
>> You could have kept some get_ksm_page()->ksm_get_folio() into a separate patch.
>>
>> i.e., "[PATCH v3 11/14] mm/ksm: remove get_ksm_page and related info" from your old series could have mostly stayed separately.
>>
> 
> Yes, but the 11th and 12th patches kind of depend on each other; e.g. after merging the 8th, 9th, 10th and 12th patches with get_ksm_page replaced, we get:
> 
> ../mm/ksm.c:993:21: error: ‘get_ksm_page’ defined but not used [-Werror=unused-function]
>    993 | static struct page *get_ksm_page(struct ksm_stable_node *stable_node,
>        |                     ^~~~~~~~~~~~
> 
> so we would have to squash the 11th and 12th if we want to merge the 12th with 8, 9 and 10...
> or should we just merge 8, 9 and 10 and keep the 11th and 12th separate, as you first suggested?
> 

I see what you mean. Including the removal there is certainly required, as you
remove the last user.

It might make sense to move some cleanups+comment adjustments from
"[PATCH v3 11/14] mm/ksm: remove get_ksm_page and related info" into relevant patches.

After Patch #1 in this series, I would do

 From 38a6f6017bf91d9a8869316b711b594909caa5ed Mon Sep 17 00:00:00 2001
From: David Hildenbrand <david@redhat.com>
Date: Wed, 10 Apr 2024 10:32:24 +0200
Subject: [PATCH] mm/ksm: rename get_ksm_page_flags() to ksm_get_folio_flags

As we are removing get_ksm_page_flags(), make the flags match the new
function name.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
  mm/ksm.c | 34 +++++++++++++++++-----------------
  1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index ac080235b002..fd2666e6bda1 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -890,10 +890,10 @@ static void remove_node_from_stable_tree(struct ksm_stable_node *stable_node)
  	free_stable_node(stable_node);
  }
  
-enum get_ksm_page_flags {
-	GET_KSM_PAGE_NOLOCK,
-	GET_KSM_PAGE_LOCK,
-	GET_KSM_PAGE_TRYLOCK
+enum ksm_get_folio_flags {
+	KSM_GET_FOLIO_NOLOCK,
+	KSM_GET_FOLIO_LOCK,
+	KSM_GET_FOLIO_TRYLOCK
  };
  
  /*
@@ -916,7 +916,7 @@ enum get_ksm_page_flags {
   * is on its way to being freed; but it is an anomaly to bear in mind.
   */
  static struct folio *ksm_get_folio(struct ksm_stable_node *stable_node,
-				 enum get_ksm_page_flags flags)
+				 enum ksm_get_folio_flags flags)
  {
  	struct folio *folio;
  	void *expected_mapping;
@@ -959,15 +959,15 @@ static struct folio *ksm_get_folio(struct ksm_stable_node *stable_node,
  		goto stale;
  	}
  
-	if (flags == GET_KSM_PAGE_TRYLOCK) {
+	if (flags == KSM_GET_FOLIO_TRYLOCK) {
  		if (!folio_trylock(folio)) {
  			folio_put(folio);
  			return ERR_PTR(-EBUSY);
  		}
-	} else if (flags == GET_KSM_PAGE_LOCK)
+	} else if (flags == KSM_GET_FOLIO_LOCK)
  		folio_lock(folio);
  
-	if (flags != GET_KSM_PAGE_NOLOCK) {
+	if (flags != KSM_GET_FOLIO_NOLOCK) {
  		if (READ_ONCE(folio->mapping) != expected_mapping) {
  			folio_unlock(folio);
  			folio_put(folio);
@@ -991,7 +991,7 @@ static struct folio *ksm_get_folio(struct ksm_stable_node *stable_node,
  }
  
  static struct page *get_ksm_page(struct ksm_stable_node *stable_node,
-				 enum get_ksm_page_flags flags)
+				 enum ksm_get_folio_flags flags)
  {
  	struct folio *folio = ksm_get_folio(stable_node, flags);
  
@@ -1009,7 +1009,7 @@ static void remove_rmap_item_from_tree(struct ksm_rmap_item *rmap_item)
  		struct page *page;
  
  		stable_node = rmap_item->head;
-		page = get_ksm_page(stable_node, GET_KSM_PAGE_LOCK);
+		page = get_ksm_page(stable_node, KSM_GET_FOLIO_LOCK);
  		if (!page)
  			goto out;
  
@@ -1118,7 +1118,7 @@ static int remove_stable_node(struct ksm_stable_node *stable_node)
  	struct page *page;
  	int err;
  
-	page = get_ksm_page(stable_node, GET_KSM_PAGE_LOCK);
+	page = get_ksm_page(stable_node, KSM_GET_FOLIO_LOCK);
  	if (!page) {
  		/*
  		 * get_ksm_page did remove_node_from_stable_tree itself.
@@ -1657,7 +1657,7 @@ static struct page *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
  		 * stable_node parameter itself will be freed from
  		 * under us if it returns NULL.
  		 */
-		_tree_page = get_ksm_page(dup, GET_KSM_PAGE_NOLOCK);
+		_tree_page = get_ksm_page(dup, KSM_GET_FOLIO_NOLOCK);
  		if (!_tree_page)
  			continue;
  		nr += 1;
@@ -1780,7 +1780,7 @@ static struct page *__stable_node_chain(struct ksm_stable_node **_stable_node_du
  	if (!is_stable_node_chain(stable_node)) {
  		if (is_page_sharing_candidate(stable_node)) {
  			*_stable_node_dup = stable_node;
-			return get_ksm_page(stable_node, GET_KSM_PAGE_NOLOCK);
+			return get_ksm_page(stable_node, KSM_GET_FOLIO_NOLOCK);
  		}
  		/*
  		 * _stable_node_dup set to NULL means the stable_node
@@ -1886,7 +1886,7 @@ static struct page *stable_tree_search(struct page *page)
  			 * fine to continue the walk.
  			 */
  			tree_page = get_ksm_page(stable_node_any,
-						 GET_KSM_PAGE_NOLOCK);
+						 KSM_GET_FOLIO_NOLOCK);
  		}
  		VM_BUG_ON(!stable_node_dup ^ !!stable_node_any);
  		if (!tree_page) {
@@ -1947,7 +1947,7 @@ static struct page *stable_tree_search(struct page *page)
  			 * than kpage, but that involves more changes.
  			 */
  			tree_page = get_ksm_page(stable_node_dup,
-						 GET_KSM_PAGE_TRYLOCK);
+						 KSM_GET_FOLIO_TRYLOCK);
  
  			if (PTR_ERR(tree_page) == -EBUSY)
  				return ERR_PTR(-EBUSY);
@@ -2119,7 +2119,7 @@ static struct ksm_stable_node *stable_tree_insert(struct page *kpage)
  			 * fine to continue the walk.
  			 */
  			tree_page = get_ksm_page(stable_node_any,
-						 GET_KSM_PAGE_NOLOCK);
+						 KSM_GET_FOLIO_NOLOCK);
  		}
  		VM_BUG_ON(!stable_node_dup ^ !!stable_node_any);
  		if (!tree_page) {
@@ -2610,7 +2610,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
  			list_for_each_entry_safe(stable_node, next,
  						 &migrate_nodes, list) {
  				page = get_ksm_page(stable_node,
-						    GET_KSM_PAGE_NOLOCK);
+						    KSM_GET_FOLIO_NOLOCK);
  				if (page)
  					put_page(page);
  				cond_resched();

Patch

diff --git a/mm/ksm.c b/mm/ksm.c
index 776094fab34e..a705ed16aa1b 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -890,14 +890,14 @@  static void remove_node_from_stable_tree(struct ksm_stable_node *stable_node)
 	free_stable_node(stable_node);
 }
 
-enum get_ksm_page_flags {
+enum ksm_get_folio_flags {
 	GET_KSM_PAGE_NOLOCK,
 	GET_KSM_PAGE_LOCK,
 	GET_KSM_PAGE_TRYLOCK
 };
 
 /*
- * get_ksm_page: checks if the page indicated by the stable node
+ * ksm_get_folio: checks if the page indicated by the stable node
  * is still its ksm page, despite having held no reference to it.
  * In which case we can trust the content of the page, and it
  * returns the gotten page; but if the page has now been zapped,
@@ -916,7 +916,7 @@  enum get_ksm_page_flags {
  * is on its way to being freed; but it is an anomaly to bear in mind.
  */
 static struct folio *ksm_get_folio(struct ksm_stable_node *stable_node,
-				 enum get_ksm_page_flags flags)
+				 enum ksm_get_folio_flags flags)
 {
 	struct folio *folio;
 	void *expected_mapping;
@@ -990,14 +990,6 @@  static struct folio *ksm_get_folio(struct ksm_stable_node *stable_node,
 	return NULL;
 }
 
-static struct page *get_ksm_page(struct ksm_stable_node *stable_node,
-				 enum get_ksm_page_flags flags)
-{
-	struct folio *folio = ksm_get_folio(stable_node, flags);
-
-	return &folio->page;
-}
-
 /*
  * Removing rmap_item from stable or unstable tree.
  * This function will clean the information from the stable/unstable tree.
@@ -1127,7 +1119,7 @@  static int remove_stable_node(struct ksm_stable_node *stable_node)
 	folio = ksm_get_folio(stable_node, GET_KSM_PAGE_LOCK);
 	if (!folio) {
 		/*
-		 * get_ksm_page did remove_node_from_stable_tree itself.
+		 * ksm_get_folio did remove_node_from_stable_tree itself.
 		 */
 		return 0;
 	}
@@ -1140,7 +1132,7 @@  static int remove_stable_node(struct ksm_stable_node *stable_node)
 	err = -EBUSY;
 	if (!folio_mapped(folio)) {
 		/*
-		 * The stable node did not yet appear stale to get_ksm_page(),
+		 * The stable node did not yet appear stale to ksm_get_folio(),
 		 * since that allows for an unmapped ksm folio to be recognized
 		 * right up until it is freed; but the node is safe to remove.
 		 * This folio might be in an LRU cache waiting to be freed,
@@ -1632,10 +1624,10 @@  bool is_page_sharing_candidate(struct ksm_stable_node *stable_node)
 	return __is_page_sharing_candidate(stable_node, 0);
 }
 
-static struct page *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
-				    struct ksm_stable_node **_stable_node,
-				    struct rb_root *root,
-				    bool prune_stale_stable_nodes)
+static struct folio *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
+				     struct ksm_stable_node **_stable_node,
+				     struct rb_root *root,
+				     bool prune_stale_stable_nodes)
 {
 	struct ksm_stable_node *dup, *found = NULL, *stable_node = *_stable_node;
 	struct hlist_node *hlist_safe;
@@ -1658,7 +1650,7 @@  static struct page *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
 		 * We must walk all stable_node_dup to prune the stale
 		 * stable nodes during lookup.
 		 *
-		 * get_ksm_page can drop the nodes from the
+		 * ksm_get_folio can drop the nodes from the
 		 * stable_node->hlist if they point to freed pages
 		 * (that's why we do a _safe walk). The "dup"
 		 * stable_node parameter itself will be freed from
@@ -1748,7 +1740,7 @@  static struct page *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
 	}
 
 	*_stable_node_dup = found;
-	return &tree_folio->page;
+	return tree_folio;
 }
 
 static struct ksm_stable_node *stable_node_dup_any(struct ksm_stable_node *stable_node,
@@ -1765,7 +1757,7 @@  static struct ksm_stable_node *stable_node_dup_any(struct ksm_stable_node *stabl
 }
 
 /*
- * Like for get_ksm_page, this function can free the *_stable_node and
+ * Like for ksm_get_folio, this function can free the *_stable_node and
  * *_stable_node_dup if the returned tree_page is NULL.
  *
  * It can also free and overwrite *_stable_node with the found
@@ -1778,16 +1770,16 @@  static struct ksm_stable_node *stable_node_dup_any(struct ksm_stable_node *stabl
  * function and will be overwritten in all cases, the caller doesn't
  * need to initialize it.
  */
-static struct page *__stable_node_chain(struct ksm_stable_node **_stable_node_dup,
-					struct ksm_stable_node **_stable_node,
-					struct rb_root *root,
-					bool prune_stale_stable_nodes)
+static struct folio *__stable_node_chain(struct ksm_stable_node **_stable_node_dup,
+					 struct ksm_stable_node **_stable_node,
+					 struct rb_root *root,
+					 bool prune_stale_stable_nodes)
 {
 	struct ksm_stable_node *stable_node = *_stable_node;
 	if (!is_stable_node_chain(stable_node)) {
 		if (is_page_sharing_candidate(stable_node)) {
 			*_stable_node_dup = stable_node;
-			return get_ksm_page(stable_node, GET_KSM_PAGE_NOLOCK);
+			return ksm_get_folio(stable_node, GET_KSM_PAGE_NOLOCK);
 		}
 		/*
 		 * _stable_node_dup set to NULL means the stable_node
@@ -1800,24 +1792,24 @@  static struct page *__stable_node_chain(struct ksm_stable_node **_stable_node_du
 			       prune_stale_stable_nodes);
 }
 
-static __always_inline struct page *chain_prune(struct ksm_stable_node **s_n_d,
-						struct ksm_stable_node **s_n,
-						struct rb_root *root)
+static __always_inline struct folio *chain_prune(struct ksm_stable_node **s_n_d,
+						 struct ksm_stable_node **s_n,
+						 struct rb_root *root)
 {
 	return __stable_node_chain(s_n_d, s_n, root, true);
 }
 
-static __always_inline struct page *chain(struct ksm_stable_node **s_n_d,
-					  struct ksm_stable_node *s_n,
-					  struct rb_root *root)
+static __always_inline struct folio *chain(struct ksm_stable_node **s_n_d,
+					   struct ksm_stable_node *s_n,
+					   struct rb_root *root)
 {
 	struct ksm_stable_node *old_stable_node = s_n;
-	struct page *tree_page;
+	struct folio *tree_folio;
 
-	tree_page = __stable_node_chain(s_n_d, &s_n, root, false);
+	tree_folio = __stable_node_chain(s_n_d, &s_n, root, false);
 	/* not pruning dups so s_n cannot have changed */
 	VM_BUG_ON(s_n != old_stable_node);
-	return tree_page;
+	return tree_folio;
 }
 
 /*
@@ -1829,7 +1821,7 @@  static __always_inline struct page *chain(struct ksm_stable_node **s_n_d,
  * This function returns the stable tree node of identical content if found,
  * NULL otherwise.
  */
-static struct page *stable_tree_search(struct page *page)
+static void *stable_tree_search(struct page *page)
 {
 	int nid;
 	struct rb_root *root;
@@ -1837,28 +1829,30 @@  static struct page *stable_tree_search(struct page *page)
 	struct rb_node *parent;
 	struct ksm_stable_node *stable_node, *stable_node_dup, *stable_node_any;
 	struct ksm_stable_node *page_node;
+	struct folio *folio;
 
-	page_node = page_stable_node(page);
+	folio = page_folio(page);
+	page_node = folio_stable_node(folio);
 	if (page_node && page_node->head != &migrate_nodes) {
 		/* ksm page forked */
-		get_page(page);
-		return page;
+		folio_get(folio);
+		return folio;
 	}
 
-	nid = get_kpfn_nid(page_to_pfn(page));
+	nid = get_kpfn_nid(folio_pfn(folio));
 	root = root_stable_tree + nid;
 again:
 	new = &root->rb_node;
 	parent = NULL;
 
 	while (*new) {
-		struct page *tree_page;
+		struct folio *tree_folio;
 		int ret;
 
 		cond_resched();
 		stable_node = rb_entry(*new, struct ksm_stable_node, node);
 		stable_node_any = NULL;
-		tree_page = chain_prune(&stable_node_dup, &stable_node,	root);
+		tree_folio = chain_prune(&stable_node_dup, &stable_node, root);
 		/*
 		 * NOTE: stable_node may have been freed by
 		 * chain_prune() if the returned stable_node_dup is
@@ -1892,14 +1886,14 @@  static struct page *stable_tree_search(struct page *page)
 			 * write protected at all times. Any will work
 			 * fine to continue the walk.
 			 */
-			tree_page = get_ksm_page(stable_node_any,
-						 GET_KSM_PAGE_NOLOCK);
+			tree_folio = ksm_get_folio(stable_node_any,
+						   GET_KSM_PAGE_NOLOCK);
 		}
 		VM_BUG_ON(!stable_node_dup ^ !!stable_node_any);
-		if (!tree_page) {
+		if (!tree_folio) {
 			/*
 			 * If we walked over a stale stable_node,
-			 * get_ksm_page() will call rb_erase() and it
+			 * ksm_get_folio() will call rb_erase() and it
 			 * may rebalance the tree from under us. So
 			 * restart the search from scratch. Returning
 			 * NULL would be safe too, but we'd generate
@@ -1909,8 +1903,8 @@  static struct page *stable_tree_search(struct page *page)
 			goto again;
 		}
 
-		ret = memcmp_pages(page, tree_page);
-		put_page(tree_page);
+		ret = memcmp_pages(page, &tree_folio->page);
+		folio_put(tree_folio);
 
 		parent = *new;
 		if (ret < 0)
@@ -1953,26 +1947,26 @@  static struct page *stable_tree_search(struct page *page)
 			 * It would be more elegant to return stable_node
 			 * than kpage, but that involves more changes.
 			 */
-			tree_page = get_ksm_page(stable_node_dup,
-						 GET_KSM_PAGE_TRYLOCK);
+			tree_folio = ksm_get_folio(stable_node_dup,
+						   GET_KSM_PAGE_TRYLOCK);
 
-			if (PTR_ERR(tree_page) == -EBUSY)
+			if (PTR_ERR(tree_folio) == -EBUSY)
 				return ERR_PTR(-EBUSY);
 
-			if (unlikely(!tree_page))
+			if (unlikely(!tree_folio))
 				/*
 				 * The tree may have been rebalanced,
 				 * so re-evaluate parent and new.
 				 */
 				goto again;
-			unlock_page(tree_page);
+			folio_unlock(tree_folio);
 
 			if (get_kpfn_nid(stable_node_dup->kpfn) !=
 			    NUMA(stable_node_dup->nid)) {
-				put_page(tree_page);
+				folio_put(tree_folio);
 				goto replace;
 			}
-			return tree_page;
+			return tree_folio;
 		}
 	}
 
@@ -1985,8 +1979,8 @@  static struct page *stable_tree_search(struct page *page)
 	rb_insert_color(&page_node->node, root);
 out:
 	if (is_page_sharing_candidate(page_node)) {
-		get_page(page);
-		return page;
+		folio_get(folio);
+		return folio;
 	} else
 		return NULL;
 
@@ -2011,12 +2005,12 @@  static struct page *stable_tree_search(struct page *page)
 					&page_node->node,
 					root);
 			if (is_page_sharing_candidate(page_node))
-				get_page(page);
+				folio_get(folio);
 			else
-				page = NULL;
+				folio = NULL;
 		} else {
 			rb_erase(&stable_node_dup->node, root);
-			page = NULL;
+			folio = NULL;
 		}
 	} else {
 		VM_BUG_ON(!is_stable_node_chain(stable_node));
@@ -2027,16 +2021,16 @@  static struct page *stable_tree_search(struct page *page)
 			DO_NUMA(page_node->nid = nid);
 			stable_node_chain_add_dup(page_node, stable_node);
 			if (is_page_sharing_candidate(page_node))
-				get_page(page);
+				folio_get(folio);
 			else
-				page = NULL;
+				folio = NULL;
 		} else {
-			page = NULL;
+			folio = NULL;
 		}
 	}
 	stable_node_dup->head = &migrate_nodes;
 	list_add(&stable_node_dup->list, stable_node_dup->head);
-	return page;
+	return folio;
 
 chain_append:
 	/* stable_node_dup could be null if it reached the limit */
@@ -2079,7 +2073,7 @@  static struct page *stable_tree_search(struct page *page)
  * This function returns the stable tree node just allocated on success,
  * NULL otherwise.
  */
-static struct ksm_stable_node *stable_tree_insert(struct page *kpage)
+static struct ksm_stable_node *stable_tree_insert(struct folio *kfolio)
 {
 	int nid;
 	unsigned long kpfn;
@@ -2089,7 +2083,7 @@  static struct ksm_stable_node *stable_tree_insert(struct page *kpage)
 	struct ksm_stable_node *stable_node, *stable_node_dup, *stable_node_any;
 	bool need_chain = false;
 
-	kpfn = page_to_pfn(kpage);
+	kpfn = folio_pfn(kfolio);
 	nid = get_kpfn_nid(kpfn);
 	root = root_stable_tree + nid;
 again:
@@ -2097,13 +2091,13 @@  static struct ksm_stable_node *stable_tree_insert(struct page *kpage)
 	new = &root->rb_node;
 
 	while (*new) {
-		struct page *tree_page;
+		struct folio *tree_folio;
 		int ret;
 
 		cond_resched();
 		stable_node = rb_entry(*new, struct ksm_stable_node, node);
 		stable_node_any = NULL;
-		tree_page = chain(&stable_node_dup, stable_node, root);
+		tree_folio = chain(&stable_node_dup, stable_node, root);
 		if (!stable_node_dup) {
 			/*
 			 * Either all stable_node dups were full in
@@ -2125,14 +2119,14 @@  static struct ksm_stable_node *stable_tree_insert(struct page *kpage)
 			 * write protected at all times. Any will work
 			 * fine to continue the walk.
 			 */
-			tree_page = get_ksm_page(stable_node_any,
-						 GET_KSM_PAGE_NOLOCK);
+			tree_folio = ksm_get_folio(stable_node_any,
+						   GET_KSM_PAGE_NOLOCK);
 		}
 		VM_BUG_ON(!stable_node_dup ^ !!stable_node_any);
-		if (!tree_page) {
+		if (!tree_folio) {
 			/*
 			 * If we walked over a stale stable_node,
-			 * get_ksm_page() will call rb_erase() and it
+			 * ksm_get_folio() will call rb_erase() and it
 			 * may rebalance the tree from under us. So
 			 * restart the search from scratch. Returning
 			 * NULL would be safe too, but we'd generate
@@ -2142,8 +2136,8 @@  static struct ksm_stable_node *stable_tree_insert(struct page *kpage)
 			goto again;
 		}
 
-		ret = memcmp_pages(kpage, tree_page);
-		put_page(tree_page);
+		ret = memcmp_pages(&kfolio->page, &tree_folio->page);
+		folio_put(tree_folio);
 
 		parent = *new;
 		if (ret < 0)
@@ -2162,7 +2156,7 @@  static struct ksm_stable_node *stable_tree_insert(struct page *kpage)
 
 	INIT_HLIST_HEAD(&stable_node_dup->hlist);
 	stable_node_dup->kpfn = kpfn;
-	set_page_stable_node(kpage, stable_node_dup);
+	folio_set_stable_node(kfolio, stable_node_dup);
 	stable_node_dup->rmap_hlist_len = 0;
 	DO_NUMA(stable_node_dup->nid = nid);
 	if (!need_chain) {
@@ -2440,7 +2434,7 @@  static void cmp_and_merge_page(struct page *page, struct ksm_rmap_item *rmap_ite
 			 * node in the stable tree and add both rmap_items.
 			 */
 			lock_page(kpage);
-			stable_node = stable_tree_insert(kpage);
+			stable_node = stable_tree_insert(page_folio(kpage));
 			if (stable_node) {
 				stable_tree_append(tree_rmap_item, stable_node,
 						   false);
@@ -3244,7 +3238,7 @@  void folio_migrate_ksm(struct folio *newfolio, struct folio *folio)
 		/*
 		 * newfolio->mapping was set in advance; now we need smp_wmb()
 		 * to make sure that the new stable_node->kpfn is visible
-		 * to get_ksm_page() before it can see that folio->mapping
+		 * to ksm_get_folio() before it can see that folio->mapping
 		 * has gone stale (or that folio_test_swapcache has been cleared).
 		 */
 		smp_wmb();
@@ -3271,7 +3265,7 @@  static bool stable_node_dup_remove_range(struct ksm_stable_node *stable_node,
 	if (stable_node->kpfn >= start_pfn &&
 	    stable_node->kpfn < end_pfn) {
 		/*
-		 * Don't get_ksm_page, page has already gone:
+		 * Don't ksm_get_folio, page has already gone:
 		 * which is why we keep kpfn instead of page*
 		 */
 		remove_node_from_stable_tree(stable_node);
@@ -3359,7 +3353,7 @@  static int ksm_memory_callback(struct notifier_block *self,
 		 * Most of the work is done by page migration; but there might
 		 * be a few stable_nodes left over, still pointing to struct
 		 * pages which have been offlined: prune those from the tree,
-		 * otherwise get_ksm_page() might later try to access a
+		 * otherwise ksm_get_folio() might later try to access a
 		 * non-existent struct page.
 		 */
 		ksm_check_stable_tree(mn->start_pfn,
diff --git a/mm/migrate.c b/mm/migrate.c
index 73a052a382f1..9f0494fd902c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -616,7 +616,7 @@  void folio_migrate_flags(struct folio *newfolio, struct folio *folio)
 	folio_migrate_ksm(newfolio, folio);
 	/*
 	 * Please do not reorder this without considering how mm/ksm.c's
-	 * get_ksm_page() depends upon ksm_migrate_page() and PageSwapCache().
+	 * ksm_get_folio() depends upon ksm_migrate_page() and PageSwapCache().
 	 */
 	if (folio_test_swapcache(folio))
 		folio_clear_swapcache(folio);