
[v1,6/7] mm/ksm: convert break_ksm() to use walk_page_range_vma()

Message ID 20220930141931.174362-7-david@redhat.com (mailing list archive)
State New
Series mm/ksm: break_ksm() cleanups and fixes

Commit Message

David Hildenbrand Sept. 30, 2022, 2:19 p.m. UTC
FOLL_MIGRATION exists only for the purpose of break_ksm(), and
actually, there is not even the need to wait for the migration to
finish, we only want to know if we're dealing with a KSM page.

Using follow_page() just to identify a KSM page overcomplicates GUP
code. Let's use walk_page_range_vma() instead, because we don't actually
care about the page itself, we only need to know a single property --
no need to even grab a reference on the page.

In my setup (AMD Ryzen 9 3900X), running the KSM selftest to test unmerge
performance on 2 GiB (taskset 0x8 ./ksm_tests -D -s 2048), this results in
a performance degradation of ~4% (old: ~5010 MiB/s, new: ~4800 MiB/s).
I don't think we particularly care for now.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/ksm.c | 70 +++++++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 62 insertions(+), 8 deletions(-)

Comments

Peter Xu Oct. 5, 2022, 9 p.m. UTC | #1
On Fri, Sep 30, 2022 at 04:19:30PM +0200, David Hildenbrand wrote:
> FOLL_MIGRATION exists only for the purpose of break_ksm(), and
> actually, there is not even the need to wait for the migration to
> finish, we only want to know if we're dealing with a KSM page.
> 
> Using follow_page() just to identify a KSM page overcomplicates GUP
> code. Let's use walk_page_range_vma() instead, because we don't actually
> care about the page itself, we only need to know a single property --
> no need to even grab a reference on the page.
> 
> In my setup (AMD Ryzen 9 3900X), running the KSM selftest to test unmerge
> performance on 2 GiB (taskset 0x8 ./ksm_tests -D -s 2048), this results in
> a performance degradation of ~4% (old: ~5010 MiB/s, new: ~4800 MiB/s).
> I don't think we particularly care for now.
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>

[...]

> +int break_ksm_pud_entry(pud_t *pud, unsigned long addr, unsigned long next,
> +			struct mm_walk *walk)
> +{
> +	/* We only care about page tables to walk to a single base page. */
> +	if (pud_leaf(*pud) || !pud_present(*pud))
> +		return 1;
> +	return 0;
> +}

Is this needed?  I thought the pgtable walker handles this already.

[...]

>  static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
>  {
> -	struct page *page;
>  	vm_fault_t ret = 0;
>  
> +	if (WARN_ON_ONCE(!IS_ALIGNED(addr, PAGE_SIZE)))
> +		return -EINVAL;
> +
>  	do {
>  		bool ksm_page = false;
>  
>  		cond_resched();
> -		page = follow_page(vma, addr,
> -				FOLL_GET | FOLL_MIGRATION | FOLL_REMOTE);
> -		if (IS_ERR_OR_NULL(page))
> -			break;
> -		if (PageKsm(page))
> -			ksm_page = true;
> -		put_page(page);
> +		ret = walk_page_range_vma(vma, addr, addr + PAGE_SIZE,
> +					  &break_ksm_ops, &ksm_page);
> +		if (WARN_ON_ONCE(ret < 0))
> +			return ret;

I'm not sure this would be worth it, especially with a 4% degradation.  The
next patch will be able to remove ~50 LOC, but this patch adds 60+ anyway,
based on another new helper just introduced...

I just don't see a strong enough reason to drop FOLL_MIGRATION.  It's
different from the previous VM_FAULT_WRITE refactor, where the unshare
approach was a convincing reason to me.

Perhaps I missed something?
David Hildenbrand Oct. 6, 2022, 9:20 a.m. UTC | #2
>> +int break_ksm_pud_entry(pud_t *pud, unsigned long addr, unsigned long next,
>> +			struct mm_walk *walk)
>> +{
>> +	/* We only care about page tables to walk to a single base page. */
>> +	if (pud_leaf(*pud) || !pud_present(*pud))
>> +		return 1;
>> +	return 0;
>> +}
> 
> Is this needed?  I thought the pgtable walker handles this already.
> 
> [...]
> 

Most probably yes. I was trying to avoid PUD splits, but I guess 
we simply should not care in VMAs that are considered by KSM (MERGEABLE). 
Most probably it never ever happens.

>>   static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
>>   {
>> -	struct page *page;
>>   	vm_fault_t ret = 0;
>>   
>> +	if (WARN_ON_ONCE(!IS_ALIGNED(addr, PAGE_SIZE)))
>> +		return -EINVAL;
>> +
>>   	do {
>>   		bool ksm_page = false;
>>   
>>   		cond_resched();
>> -		page = follow_page(vma, addr,
>> -				FOLL_GET | FOLL_MIGRATION | FOLL_REMOTE);
>> -		if (IS_ERR_OR_NULL(page))
>> -			break;
>> -		if (PageKsm(page))
>> -			ksm_page = true;
>> -		put_page(page);
>> +		ret = walk_page_range_vma(vma, addr, addr + PAGE_SIZE,
>> +					  &break_ksm_ops, &ksm_page);
>> +		if (WARN_ON_ONCE(ret < 0))
>> +			return ret;
> 
> I'm not sure this would be worth it, especially with a 4% degradation.  The
> next patch will be able to remove ~50 LOC, but this patch adds 60+ anyway,
> based on another new helper just introduced...
> 
> I just don't see a strong enough reason to drop FOLL_MIGRATION.  It's
> different from the previous VM_FAULT_WRITE refactor, where the unshare
> approach was a convincing reason to me.
> 
> Perhaps I missed something?

My main motivation is to remove most of that GUP hackery here, which is
1) Getting a reference on a page and waiting for migration to finish
    even though both are unnecessary.
2) As we don't have sufficient control, we added FOLL_MIGRATION hacks to
    MM core to work around limitations in the GUP-based approach.
3) We rely on the legacy follow_page() interface that we should really get
    rid of in the long term.

All we want to do is walk the page tables and make a decision if 
something we care about is mapped. Instead of leaking these details via 
hacks into GUP code and making that code harder to grasp/maintain, this 
patch moves that logic to the actual user, while reusing generic page 
walking code.

Yes, we have to extend page walking code, but it's just the natural, 
non-hacky way of doing it.

Regarding the 4% performance degradation (if I hadn't added the 
benchmarks, nobody would know and probably care ;) ), I am not quite 
sure why that is the case. We're just walking page tables after all in 
both cases. Maybe the callback-based implementation of pagewalk code is 
less efficient, but we might be able to improve that implementation if 
we really care about performance here. Maybe removing 
break_ksm_pud_entry() already improves the numbers slightly.

Thanks!
Peter Xu Oct. 6, 2022, 7:28 p.m. UTC | #3
On Thu, Oct 06, 2022 at 11:20:42AM +0200, David Hildenbrand wrote:
> > > +int break_ksm_pud_entry(pud_t *pud, unsigned long addr, unsigned long next,
> > > +			struct mm_walk *walk)
> > > +{
> > > +	/* We only care about page tables to walk to a single base page. */
> > > +	if (pud_leaf(*pud) || !pud_present(*pud))
> > > +		return 1;
> > > +	return 0;
> > > +}
> > 
> > Is this needed?  I thought the pgtable walker handles this already.
> > 
> > [...]
> > 
> 
> Most probably yes. I was trying to avoid PUD splits, but I guess we
> simply should not care in VMAs that are considered by KSM (MERGEABLE). Most
> probably it never ever happens.

I was surprised the split is the default approach; didn't really notice
that before. Yeah maybe better to keep it.
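
(For reference, a rough paraphrase of the walker's default behavior, loosely
based on mm/pagewalk.c:walk_pud_range() -- not the exact upstream code, and
details differ between kernel versions:

	/*
	 * Sketch only: a non-zero return from pud_entry() ends the walk and
	 * is propagated back to the walk_page_range*() caller (negative
	 * means error). Only if the walk is still supposed to descend does
	 * the walker fall back to splitting a huge PUD mapped into a VMA.
	 */
	if (ops->pud_entry)
		err = ops->pud_entry(pud, addr, next, walk);
	if (err)
		break;
	if (walk->vma)
		split_huge_pud(walk->vma, pud, addr);
	err = walk_pmd_range(pud, addr, next, walk);

so returning 1 from break_ksm_pud_entry() above is what avoids the split.)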

> 
> > >   static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
> > >   {
> > > -	struct page *page;
> > >   	vm_fault_t ret = 0;
> > > +	if (WARN_ON_ONCE(!IS_ALIGNED(addr, PAGE_SIZE)))
> > > +		return -EINVAL;
> > > +
> > >   	do {
> > >   		bool ksm_page = false;
> > >   		cond_resched();
> > > -		page = follow_page(vma, addr,
> > > -				FOLL_GET | FOLL_MIGRATION | FOLL_REMOTE);
> > > -		if (IS_ERR_OR_NULL(page))
> > > -			break;
> > > -		if (PageKsm(page))
> > > -			ksm_page = true;
> > > -		put_page(page);
> > > +		ret = walk_page_range_vma(vma, addr, addr + PAGE_SIZE,
> > > +					  &break_ksm_ops, &ksm_page);
> > > +		if (WARN_ON_ONCE(ret < 0))
> > > +			return ret;
> > 
> > I'm not sure this would be worth it, especially with a 4% degradation.  The
> > next patch will be able to remove ~50 LOC, but this patch adds 60+ anyway,
> > based on another new helper just introduced...
> > 
> > I just don't see a strong enough reason to drop FOLL_MIGRATION.  It's
> > different from the previous VM_FAULT_WRITE refactor, where the unshare
> > approach was a convincing reason to me.
> > 
> > Perhaps I missed something?
> 
> My main motivation is to remove most of that GUP hackery here, which is
> 1) Getting a reference on a page and waiting for migration to finish
>    even though both are unnecessary.
> 2) As we don't have sufficient control, we added FOLL_MIGRATION hacks to
>    MM core to work around limitations in the GUP-based approach.

I saw that one point of adding FOLL_MIGRATION from Hugh was to have a hint for
follow_page() users:

  I'd have preferred to avoid another flag, and do it every time, in case
  someone else makes the same easy mistake..

Though..

> 3) We rely on the legacy follow_page() interface that we should really get
>    rid of in the long term.

..this is part of an effort to remove follow_page()?  More context would be
helpful in that case.

> 
> All we want to do is walk the page tables and make a decision if something
> we care about is mapped. Instead of leaking these details via hacks into GUP
> code and making that code harder to grasp/maintain, this patch moves that
> logic to the actual user, while reusing generic page walking code.

Indeed there's only one ksm user, at least proving that the flag was not
widely used.

> 
> Yes, we have to extend page walking code, but it's just the natural,
> non-hacky way of doing it.
> 
> Regarding the 4% performance degradation (if I hadn't added the
> benchmarks, nobody would know and probably care ;) ), I am not quite sure
> why that is the case. We're just walking page tables after all in both
> cases. Maybe the callback-based implementation of pagewalk code is less
> efficient, but we might be able to improve that implementation if we really
> care about performance here. Maybe removing break_ksm_pud_entry() already
> improves the numbers slightly.

Yeah it could be the walker is just slower.  And for !ksm walking your code
should be faster when hitting migration entries, but that should really be rare
anyway.
David Hildenbrand Oct. 20, 2022, 8:59 a.m. UTC | #4
On 30.09.22 16:19, David Hildenbrand wrote:
> FOLL_MIGRATION exists only for the purpose of break_ksm(), and
> actually, there is not even the need to wait for the migration to
> finish, we only want to know if we're dealing with a KSM page.
> 
> Using follow_page() just to identify a KSM page overcomplicates GUP
> code. Let's use walk_page_range_vma() instead, because we don't actually
> care about the page itself, we only need to know a single property --
> no need to even grab a reference on the page.
> 
> In my setup (AMD Ryzen 9 3900X), running the KSM selftest to test unmerge
> performance on 2 GiB (taskset 0x8 ./ksm_tests -D -s 2048), this results in
> a performance degradation of ~4% (old: ~5010 MiB/s, new: ~4800 MiB/s).
> I don't think we particularly care for now.
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>   mm/ksm.c | 70 +++++++++++++++++++++++++++++++++++++++++++++++++-------
>   1 file changed, 62 insertions(+), 8 deletions(-)
> 
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 4d7bcf7da7c3..814c1a37c323 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -39,6 +39,7 @@
>   #include <linux/freezer.h>
>   #include <linux/oom.h>
>   #include <linux/numa.h>
> +#include <linux/pagewalk.h>
>   
>   #include <asm/tlbflush.h>
>   #include "internal.h"
> @@ -452,6 +453,60 @@ static inline bool ksm_test_exit(struct mm_struct *mm)
>   	return atomic_read(&mm->mm_users) == 0;
>   }
>   
> +int break_ksm_pud_entry(pud_t *pud, unsigned long addr, unsigned long next,
> +			struct mm_walk *walk)
> +{
> +	/* We only care about page tables to walk to a single base page. */
> +	if (pud_leaf(*pud) || !pud_present(*pud))
> +		return 1;
> +	return 0;
> +}
> +
> +int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long next,
> +			struct mm_walk *walk)
> +{
> +	bool *ksm_page = walk->private;
> +	struct page *page = NULL;
> +	pte_t *pte, ptent;
> +	spinlock_t *ptl;
> +
> +	/* We only care about page tables to walk to a single base page. */
> +	if (pmd_leaf(*pmd) || !pmd_present(*pmd))
> +		return 1;
> +
> +	/*
> +	 * We only lookup a single page (a) no need to iterate; and (b)
> +	 * always return 1 to exit immediately and not iterate in the caller.
> +	 */
> +	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
> +	ptent = *pte;
> +
> +	if (pte_none(ptent))
> +		return 1;

As reported by Janosch, we fail to drop the lock here.


t480s: ~/git/linux ksm_unshare $ git diff
diff --git a/mm/ksm.c b/mm/ksm.c
index 26aec41b127c..94f8e114c89f 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -435,7 +435,7 @@ int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long next,
         pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
  
         if (pte_none(*pte))
-               return 1;
+               goto out_unlock;
         if (!pte_present(*pte)) {
                 swp_entry_t entry = pte_to_swp_entry(*pte);
  
@@ -451,6 +451,7 @@ int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long next,
         }
         if (page && PageKsm(page))
                 *ksm_page = true;
+out_unlock:
         pte_unmap_unlock(pte, ptl);
         return 1;
  }
David Hildenbrand Oct. 21, 2022, 9:11 a.m. UTC | #5
On 06.10.22 21:28, Peter Xu wrote:
> On Thu, Oct 06, 2022 at 11:20:42AM +0200, David Hildenbrand wrote:
>>>> +int break_ksm_pud_entry(pud_t *pud, unsigned long addr, unsigned long next,
>>>> +			struct mm_walk *walk)
>>>> +{
>>>> +	/* We only care about page tables to walk to a single base page. */
>>>> +	if (pud_leaf(*pud) || !pud_present(*pud))
>>>> +		return 1;
>>>> +	return 0;
>>>> +}
>>>
>>> Is this needed?  I thought the pgtable walker handles this already.
>>>
>>> [...]
>>>
>>
>> Most probably yes. I was trying to avoid PUD splits, but I guess we
>> simply should not care in VMAs that are considered by KSM (MERGEABLE). Most
>> probably it never ever happens.
> 
> I was surprised the split is the default approach; didn't really notice
> that before. Yeah maybe better to keep it.

Interestingly, one callback reduces the benchmark result by 100-200 MiB/s.

With only break_ksm_pmd_entry(), I get ~4900 MiB/s instead of ~5010 MiB/s (~2%).

I came to the conclusion that we really shouldn't have to worry about pud
THPs here: it could only be a file PUD and splitting that only zaps the
entry, but doesn't PMD- or PTE-map it.

Also, I think we will barely see large PUD THPs in a mergeable mapping ... :)

[...]

>> My main motivation is to remove most of that GUP hackery here, which is
>> 1) Getting a reference on a page and waiting for migration to finish
>>     even though both are unnecessary.
>> 2) As we don't have sufficient control, we added FOLL_MIGRATION hacks to
>>     MM core to work around limitations in the GUP-based approach.
> 
> I saw that one point of adding FOLL_MIGRATION from Hugh was to have a hint for
> follow_page() users:
> 
>    I'd have preferred to avoid another flag, and do it every time, in case
>    someone else makes the same easy mistake..
> 
> Though..

The important thing, I think, is that FOLL_MIGRATION really only applies to
follow_page(). In the case of "modern" GUP we will just wait for migration
entries, handle swap entries, ... when triggering a page fault.
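
(Roughly, a simplified sketch of what I mean, loosely modeled on the
do_swap_page() fault path -- not the exact upstream code:

	/*
	 * Sketch only: the fault handler itself recognizes the non-present
	 * migration PTE, waits for migration to finish and then retries the
	 * fault -- so faulting GUP never needed FOLL_MIGRATION.
	 */
	entry = pte_to_swp_entry(vmf->orig_pte);
	if (is_migration_entry(entry)) {
		migration_entry_wait(vmf->vma->vm_mm, vmf->pmd, vmf->address);
		return 0;
	}

Only the synchronous follow_page() lookup ever needed the extra flag to get
past a migration entry without faulting.)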

> 
>> 3) We rely on the legacy follow_page() interface that we should really get
>>     rid of in the long term.
> 
> ..this is part of an effort to remove follow_page()?  More context would be
> helpful in that case.

The comment from Hugh is another example of why follow_page() adds complexity.
One might wonder how pages in the swapcache, device coherent pages, ...
would have to be handled.

Short-term, I want to clean up GUP. Long-term we might want to consider
removing follow_page() completely.

[...]

>>
>> Yes, we have to extend page walking code, but it's just the natural,
>> non-hacky way of doing it.
>>
>> Regarding the 4% performance degradation (if I hadn't added the
>> benchmarks, nobody would know and probably care ;) ), I am not quite sure
>> why that is the case. We're just walking page tables after all in both
>> cases. Maybe the callback-based implementation of pagewalk code is less
>> efficient, but we might be able to improve that implementation if we really
>> care about performance here. Maybe removing break_ksm_pud_entry() already
>> improves the numbers slightly.
> 
> Yeah it could be the walker is just slower.  And for !ksm walking your code
> should be faster when hitting migration entries, but that should really be rare
> anyway.


I have the following right now:


 From 7f767f9e9e673a29793cd35f1c3d66ed593b67cb Mon Sep 17 00:00:00 2001
From: David Hildenbrand <david@redhat.com>
Date: Mon, 25 Jul 2022 10:36:20 +0200
Subject: [PATCH] mm/ksm: convert break_ksm() to use walk_page_range_vma()

FOLL_MIGRATION exists only for the purpose of break_ksm(), and
actually, there is not even the need to wait for the migration to
finish, we only want to know if we're dealing with a KSM page.

Using follow_page() just to identify a KSM page overcomplicates GUP
code. Let's use walk_page_range_vma() instead, because we don't actually
care about the page itself, we only need to know a single property --
no need to even grab a reference.

So, get rid of follow_page() usage such that we can get rid of
FOLL_MIGRATION now and eventually be able to get rid of follow_page() in
the future.

In my setup (AMD Ryzen 9 3900X), running the KSM selftest to test unmerge
performance on 2 GiB (taskset 0x8 ./ksm_tests -D -s 2048), this results in
a performance degradation of ~2% (old: ~5010 MiB/s, new: ~4900 MiB/s).
I don't think we particularly care for now.

Interestingly, the benchmark reduction is due to the single callback.
Adding a second callback (e.g., pud_entry()) reduces the benchmark by
another 100-200 MiB/s.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
  mm/ksm.c | 49 +++++++++++++++++++++++++++++++++++++++----------
  1 file changed, 39 insertions(+), 10 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index c6f58aa6e731..5cdb852ff132 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -39,6 +39,7 @@
  #include <linux/freezer.h>
  #include <linux/oom.h>
  #include <linux/numa.h>
+#include <linux/pagewalk.h>
  
  #include <asm/tlbflush.h>
  #include "internal.h"
@@ -419,6 +420,39 @@ static inline bool ksm_test_exit(struct mm_struct *mm)
  	return atomic_read(&mm->mm_users) == 0;
  }
  
+static int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long next,
+			struct mm_walk *walk)
+{
+	struct page *page = NULL;
+	spinlock_t *ptl;
+	pte_t *pte;
+	int ret;
+
+	if (pmd_leaf(*pmd) || !pmd_present(*pmd))
+		return 0;
+
+	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
+	if (pte_present(*pte)) {
+		page = vm_normal_page(walk->vma, addr, *pte);
+	} else if (!pte_none(*pte)) {
+		swp_entry_t entry = pte_to_swp_entry(*pte);
+
+		/*
+		 * As KSM pages remain KSM pages until freed, no need to wait
+		 * here for migration to end.
+		 */
+		if (is_migration_entry(entry))
+			page = pfn_swap_entry_to_page(entry);
+	}
+	ret = page && PageKsm(page);
+	pte_unmap_unlock(pte, ptl);
+	return ret;
+}
+
+static const struct mm_walk_ops break_ksm_ops = {
+	.pmd_entry = break_ksm_pmd_entry,
+};
+
  /*
   * We use break_ksm to break COW on a ksm page by triggering unsharing,
   * such that the ksm page will get replaced by an exclusive anonymous page.
@@ -434,21 +468,16 @@ static inline bool ksm_test_exit(struct mm_struct *mm)
   */
  static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
  {
-	struct page *page;
  	vm_fault_t ret = 0;
  
  	do {
-		bool ksm_page = false;
+		int ksm_page;
  
  		cond_resched();
-		page = follow_page(vma, addr,
-				FOLL_GET | FOLL_MIGRATION | FOLL_REMOTE);
-		if (IS_ERR_OR_NULL(page))
-			break;
-		if (PageKsm(page))
-			ksm_page = true;
-		put_page(page);
-
+		ksm_page = walk_page_range_vma(vma, addr, addr + 1,
+					       &break_ksm_ops, NULL);
+		if (WARN_ON_ONCE(ksm_page < 0))
+			return ksm_page;
  		if (!ksm_page)
  			return 0;
  		ret = handle_mm_fault(vma, addr,

Patch

diff --git a/mm/ksm.c b/mm/ksm.c
index 4d7bcf7da7c3..814c1a37c323 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -39,6 +39,7 @@ 
 #include <linux/freezer.h>
 #include <linux/oom.h>
 #include <linux/numa.h>
+#include <linux/pagewalk.h>
 
 #include <asm/tlbflush.h>
 #include "internal.h"
@@ -452,6 +453,60 @@  static inline bool ksm_test_exit(struct mm_struct *mm)
 	return atomic_read(&mm->mm_users) == 0;
 }
 
+int break_ksm_pud_entry(pud_t *pud, unsigned long addr, unsigned long next,
+			struct mm_walk *walk)
+{
+	/* We only care about page tables to walk to a single base page. */
+	if (pud_leaf(*pud) || !pud_present(*pud))
+		return 1;
+	return 0;
+}
+
+int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long next,
+			struct mm_walk *walk)
+{
+	bool *ksm_page = walk->private;
+	struct page *page = NULL;
+	pte_t *pte, ptent;
+	spinlock_t *ptl;
+
+	/* We only care about page tables to walk to a single base page. */
+	if (pmd_leaf(*pmd) || !pmd_present(*pmd))
+		return 1;
+
+	/*
+	 * We only lookup a single page (a) no need to iterate; and (b)
+	 * always return 1 to exit immediately and not iterate in the caller.
+	 */
+	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
+	ptent = *pte;
+
+	if (pte_none(ptent))
+		return 1;
+	if (!pte_present(ptent)) {
+		swp_entry_t entry = pte_to_swp_entry(ptent);
+
+		/*
+		 * We only care about migration of KSM pages. As KSM pages
+		 * remain KSM pages until freed, no need to wait here for
+		 * migration to end to identify such.
+		 */
+		if (is_migration_entry(entry))
+			page = pfn_swap_entry_to_page(entry);
+	} else {
+		page = vm_normal_page(walk->vma, addr, ptent);
+	}
+	if (page && PageKsm(page))
+		*ksm_page = true;
+	pte_unmap_unlock(pte, ptl);
+	return 1;
+}
+
+static const struct mm_walk_ops break_ksm_ops = {
+	.pud_entry = break_ksm_pud_entry,
+	.pmd_entry = break_ksm_pmd_entry,
+};
+
 /*
  * We use break_ksm to break COW on a ksm page by triggering unsharing,
  * such that the ksm page will get replaced by an exclusive anonymous page.
@@ -467,20 +522,19 @@  static inline bool ksm_test_exit(struct mm_struct *mm)
  */
 static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
 {
-	struct page *page;
 	vm_fault_t ret = 0;
 
+	if (WARN_ON_ONCE(!IS_ALIGNED(addr, PAGE_SIZE)))
+		return -EINVAL;
+
 	do {
 		bool ksm_page = false;
 
 		cond_resched();
-		page = follow_page(vma, addr,
-				FOLL_GET | FOLL_MIGRATION | FOLL_REMOTE);
-		if (IS_ERR_OR_NULL(page))
-			break;
-		if (PageKsm(page))
-			ksm_page = true;
-		put_page(page);
+		ret = walk_page_range_vma(vma, addr, addr + PAGE_SIZE,
+					  &break_ksm_ops, &ksm_page);
+		if (WARN_ON_ONCE(ret < 0))
+			return ret;
 
 		if (!ksm_page)
 			return 0;