
[v3,1/8] mm: Remove special swap entry functions

Message ID 20210226071832.31547-2-apopple@nvidia.com (mailing list archive)
State New, archived
Series Add support for SVM atomics in Nouveau

Commit Message

Alistair Popple Feb. 26, 2021, 7:18 a.m. UTC
Remove the migration and device private entry_to_page() and
entry_to_pfn() inline functions and instead open code them directly.
This results in shorter code which is easier to understand.
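For example, call sites change from:

	page = migration_entry_to_page(entry);

to:

	page = pfn_to_page(swp_offset(entry));

and likewise for device_private_entry_to_page() and the *_to_pfn() variants.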

Signed-off-by: Alistair Popple <apopple@nvidia.com>
---
 arch/s390/mm/pgtable.c  |  2 +-
 fs/proc/task_mmu.c      | 23 +++++++------------
 include/linux/swap.h    |  4 ++--
 include/linux/swapops.h | 51 ++++-------------------------------------
 mm/hmm.c                |  5 ++--
 mm/memcontrol.c         |  2 +-
 mm/memory.c             | 10 ++++----
 mm/migrate.c            |  6 ++---
 mm/page_vma_mapped.c    |  6 ++---
 9 files changed, 30 insertions(+), 79 deletions(-)

Comments

Jason Gunthorpe March 1, 2021, 5:46 p.m. UTC | #1
On Fri, Feb 26, 2021 at 06:18:25PM +1100, Alistair Popple wrote:
> Remove the migration and device private entry_to_page() and
> entry_to_pfn() inline functions and instead open code them directly.
> This results in shorter code which is easier to understand.
> 
> Signed-off-by: Alistair Popple <apopple@nvidia.com>
> ---
>  arch/s390/mm/pgtable.c  |  2 +-
>  fs/proc/task_mmu.c      | 23 +++++++------------
>  include/linux/swap.h    |  4 ++--
>  include/linux/swapops.h | 51 ++++-------------------------------------
>  mm/hmm.c                |  5 ++--
>  mm/memcontrol.c         |  2 +-
>  mm/memory.c             | 10 ++++----
>  mm/migrate.c            |  6 ++---
>  mm/page_vma_mapped.c    |  6 ++---
>  9 files changed, 30 insertions(+), 79 deletions(-)

I wish you could come up with a more descriptive word than special
here

What I understand is this is true when the swap_offset is a pfn?

> -static inline struct page *migration_entry_to_page(swp_entry_t entry)
> -{
> -	struct page *p = pfn_to_page(swp_offset(entry));
> -	/*
> -	 * Any use of migration entries may only occur while the
> -	 * corresponding page is locked
> -	 */
> -	BUG_ON(!PageLocked(compound_head(p)));
> -	return p;

And this constraint has been completely lost?

A comment in front of the is_special_entry explaining all the rules
would help a lot

Transformation looks fine otherwise

Jason
Alistair Popple March 2, 2021, 12:21 a.m. UTC | #2
On Tuesday, 2 March 2021 4:46:42 AM AEDT Jason Gunthorpe wrote:
> 
> I wish you could come up with a more descriptive word than special
> here
> 
> What I understand is this is true when the swap_offset is a pfn?

Correct, and that points to a better name. Maybe is_pfn_swap_entry()? In which 
case adding a helper as Christoph suggested makes some more sense. Eg: 
pfn_swap_entry_to_page()
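As a rough sketch (not the final naming or placement), that would be something
like:

static inline bool is_pfn_swap_entry(swp_entry_t entry)
{
	return is_migration_entry(entry) || is_device_private_entry(entry);
}

static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
{
	/* Only valid for entries whose swap offset encodes a pfn. */
	return pfn_to_page(swp_offset(entry));
}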

> > -static inline struct page *migration_entry_to_page(swp_entry_t entry)
> > -{
> > -	struct page *p = pfn_to_page(swp_offset(entry));
> > -	/*
> > -	 * Any use of migration entries may only occur while the
> > -	 * corresponding page is locked
> > -	 */
> > -	BUG_ON(!PageLocked(compound_head(p)));
> > -	return p;
> 
> And this constraint has been completely lost?

Yes, sorry I should have called that out. I didn't think losing the check was 
a big deal, but I can add some checks to some of the call sites which would 
catch a page being incorrectly unlocked.
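For example, something like this after the lookup in __migration_entry_wait()
(illustrative only):

	page = pfn_to_page(swp_offset(entry));
	/* Migration entries are only valid while the page is locked. */
	VM_BUG_ON_PAGE(!PageLocked(compound_head(page)), page);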

> A comment in front of the is_special_entry explaining all the rules
> would help a lot

Will add one.
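Roughly along these lines (wording still a sketch):

/*
 * A special (non-swap) entry has a pfn rather than a swap offset stored in
 * the offset field.  It is used for migration and device private entries,
 * which point at a struct page instead of swap space.  Migration entries
 * may only be dereferenced while the corresponding page is locked.
 */
static inline bool is_special_entry(swp_entry_t entry)
{
	return is_migration_entry(entry) || is_device_private_entry(entry);
}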

> Transformation looks fine otherwise

Thanks.

 - Alistair
 
> Jason
>
Alistair Popple March 2, 2021, 8:52 a.m. UTC | #3
On Saturday, 27 February 2021 2:59:09 AM AEDT Christoph Hellwig wrote:
> > -		struct page *page = migration_entry_to_page(entry);
> > +		struct page *page = pfn_to_page(swp_offset(entry));
> 
> I wonder if keeping a single special_entry_to_page() helper would still
> be useful.  But I'm not entirely sure.  There are also two more open
> coded copies of this in the THP migration code.

I think it might be, if only to clearly document where these entries are used. 
Will add it for the next version to see what it looks like.

> > -#define free_swap_and_cache(e) ({(is_migration_entry(e) || is_device_private_entry(e));})
> > -#define swapcache_prepare(e) ({(is_migration_entry(e) || is_device_private_entry(e));})
> > +#define free_swap_and_cache(e) is_special_entry(e)
> > +#define swapcache_prepare(e) is_special_entry(e)
> 
> Staring at this I'm really, really confused at what this is doing.
> 
> Looking a little closer these are the !CONFIG_SWAP stubs, but it could
> probably use a comment or two.

Will do, thanks.

 - Alistair
 
> >  	} else if (is_migration_entry(entry)) {
> > -		page = migration_entry_to_page(entry);
> > +		page = pfn_to_page(swp_offset(entry));
> >  
> >  		rss[mm_counter(page)]++;
> >  
> > @@ -737,7 +737,7 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> >  			set_pte_at(src_mm, addr, src_pte, pte);
> >  		}
> >  	} else if (is_device_private_entry(entry)) {
> > -		page = device_private_entry_to_page(entry);
> > +		page = pfn_to_page(swp_offset(entry));
> >  
> >  		/*
> >  		 * Update rss count even for unaddressable pages, as
> > @@ -1274,7 +1274,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
> >  
> >  		entry = pte_to_swp_entry(ptent);
> >  		if (is_device_private_entry(entry)) {
> > -			struct page *page = device_private_entry_to_page(entry);
> > +			struct page *page = pfn_to_page(swp_offset(entry));
> >  
> >  			if (unlikely(details && details->check_mapping)) {
> >  				/*
> > @@ -1303,7 +1303,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
> >  		else if (is_migration_entry(entry)) {
> >  			struct page *page;
> >  
> > -			page = migration_entry_to_page(entry);
> > +			page = pfn_to_page(swp_offset(entry));
> >  			rss[mm_counter(page)]--;
> >  		}
> >  		if (unlikely(!free_swap_and_cache(entry)))
> > @@ -3271,7 +3271,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >  			migration_entry_wait(vma->vm_mm, vmf->pmd,
> >  					     vmf->address);
> >  		} else if (is_device_private_entry(entry)) {
> > -			vmf->page = device_private_entry_to_page(entry);
> > +			vmf->page = pfn_to_page(swp_offset(entry));
> >  			ret = vmf->page->pgmap->ops->migrate_to_ram(vmf);
> >  		} else if (is_hwpoison_entry(entry)) {
> >  			ret = VM_FAULT_HWPOISON;
> > diff --git a/mm/migrate.c b/mm/migrate.c
> > index 20ca887ea769..72adcc3d8f5b 100644
> > --- a/mm/migrate.c
> > +++ b/mm/migrate.c
> > @@ -321,7 +321,7 @@ void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
> >  	if (!is_migration_entry(entry))
> >  		goto out;
> >  
> > -	page = migration_entry_to_page(entry);
> > +	page = pfn_to_page(swp_offset(entry));
> >  
> >  	/*
> >  	 * Once page cache replacement of page migration started, page_count
> > @@ -361,7 +361,7 @@ void pmd_migration_entry_wait(struct mm_struct *mm, pmd_t *pmd)
> >  	ptl = pmd_lock(mm, pmd);
> >  	if (!is_pmd_migration_entry(*pmd))
> >  		goto unlock;
> > -	page = migration_entry_to_page(pmd_to_swp_entry(*pmd));
> > +	page = pfn_to_page(swp_offset(pmd_to_swp_entry(*pmd)));
> >  	if (!get_page_unless_zero(page))
> >  		goto unlock;
> >  	spin_unlock(ptl);
> > @@ -2437,7 +2437,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
> >  			if (!is_device_private_entry(entry))
> >  				goto next;
> >  
> > -			page = device_private_entry_to_page(entry);
> > +			page = pfn_to_page(swp_offset(entry));
> >  			if (!(migrate->flags &
> >  				MIGRATE_VMA_SELECT_DEVICE_PRIVATE) ||
> >  			    page->pgmap->owner != migrate->pgmap_owner)
> > diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> > index 86e3a3688d59..34230d08556a 100644
> > --- a/mm/page_vma_mapped.c
> > +++ b/mm/page_vma_mapped.c
> > @@ -96,7 +96,7 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw)
> >  		if (!is_migration_entry(entry))
> >  			return false;
> >  
> > -		pfn = migration_entry_to_pfn(entry);
> > +		pfn = swp_offset(entry);
> >  	} else if (is_swap_pte(*pvmw->pte)) {
> >  		swp_entry_t entry;
> >  
> > @@ -105,7 +105,7 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw)
> >  		if (!is_device_private_entry(entry))
> >  			return false;
> >  
> > -		pfn = device_private_entry_to_pfn(entry);
> > +		pfn = swp_offset(entry);
> >  	} else {
> >  		if (!pte_present(*pvmw->pte))
> >  			return false;
> > @@ -200,7 +200,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
> >  				if (is_migration_entry(pmd_to_swp_entry(*pvmw->pmd))) {
> >  					swp_entry_t entry = pmd_to_swp_entry(*pvmw->pmd);
> >  
> > -					if (migration_entry_to_page(entry) != page)
> > +					if (pfn_to_page(swp_offset(entry)) != page)
> >  						return not_found(pvmw);
> >  					return true;
> >  				}
> ---end quoted text---
>
Alistair Popple March 2, 2021, 12:02 p.m. UTC | #4
On Tuesday, 2 March 2021 7:52:53 PM AEDT Alistair Popple wrote:
> On Saturday, 27 February 2021 2:59:09 AM AEDT Christoph Hellwig wrote:
> > > -		struct page *page = migration_entry_to_page(entry);
> > > +		struct page *page = pfn_to_page(swp_offset(entry));
> > 
> > I wonder if keeping a single special_entry_to_page() helper would still
> > be useful.  But I'm not entirely sure.  There are also two more open
> > coded copies of this in the THP migration code.
> 
> I think it might be, if only to clearly document where these entries are used.
> Will add it for the next version to see what it looks like.

Actually the main advantage ends up being that it becomes easy to retain the 
page locked check for migration entries.
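I.e. something like this (sketch only, not the final version):

static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
{
	struct page *p = pfn_to_page(swp_offset(entry));

	/*
	 * Any use of migration entries may only occur while the
	 * corresponding page is locked.
	 */
	BUG_ON(is_migration_entry(entry) && !PageLocked(compound_head(p)));

	return p;
}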

 - Alistair

Patch

diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
index 18205f851c24..25c28d28ff61 100644
--- a/arch/s390/mm/pgtable.c
+++ b/arch/s390/mm/pgtable.c
@@ -691,7 +691,7 @@  static void ptep_zap_swap_entry(struct mm_struct *mm, swp_entry_t entry)
 	if (!non_swap_entry(entry))
 		dec_mm_counter(mm, MM_SWAPENTS);
 	else if (is_migration_entry(entry)) {
-		struct page *page = migration_entry_to_page(entry);
+		struct page *page = pfn_to_page(swp_offset(entry));
 
 		dec_mm_counter(mm, mm_counter(page));
 	}
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 602e3a52884d..ee9686f96a41 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -514,10 +514,8 @@  static void smaps_pte_entry(pte_t *pte, unsigned long addr,
 			} else {
 				mss->swap_pss += (u64)PAGE_SIZE << PSS_SHIFT;
 			}
-		} else if (is_migration_entry(swpent))
-			page = migration_entry_to_page(swpent);
-		else if (is_device_private_entry(swpent))
-			page = device_private_entry_to_page(swpent);
+		} else if (is_special_entry(swpent))
+			page = pfn_to_page(swp_offset(swpent));
 	} else if (unlikely(IS_ENABLED(CONFIG_SHMEM) && mss->check_shmem_swap
 							&& pte_none(*pte))) {
 		page = xa_load(&vma->vm_file->f_mapping->i_pages,
@@ -549,7 +547,7 @@  static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
 		swp_entry_t entry = pmd_to_swp_entry(*pmd);
 
 		if (is_migration_entry(entry))
-			page = migration_entry_to_page(entry);
+			page = pfn_to_page(swp_offset(entry));
 	}
 	if (IS_ERR_OR_NULL(page))
 		return;
@@ -691,10 +689,8 @@  static int smaps_hugetlb_range(pte_t *pte, unsigned long hmask,
 	} else if (is_swap_pte(*pte)) {
 		swp_entry_t swpent = pte_to_swp_entry(*pte);
 
-		if (is_migration_entry(swpent))
-			page = migration_entry_to_page(swpent);
-		else if (is_device_private_entry(swpent))
-			page = device_private_entry_to_page(swpent);
+		if (is_special_entry(swpent))
+			page = pfn_to_page(swp_offset(swpent));
 	}
 	if (page) {
 		int mapcount = page_mapcount(page);
@@ -1382,11 +1378,8 @@  static pagemap_entry_t pte_to_pagemap_entry(struct pagemapread *pm,
 			frame = swp_type(entry) |
 				(swp_offset(entry) << MAX_SWAPFILES_SHIFT);
 		flags |= PM_SWAP;
-		if (is_migration_entry(entry))
-			page = migration_entry_to_page(entry);
-
-		if (is_device_private_entry(entry))
-			page = device_private_entry_to_page(entry);
+		if (is_special_entry(entry))
+			page = pfn_to_page(swp_offset(entry));
 	}
 
 	if (page && !PageAnon(page))
@@ -1443,7 +1436,7 @@  static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
 			if (pmd_swp_soft_dirty(pmd))
 				flags |= PM_SOFT_DIRTY;
 			VM_BUG_ON(!is_pmd_migration_entry(pmd));
-			page = migration_entry_to_page(entry);
+			page = pfn_to_page(swp_offset(entry));
 		}
 #endif
 
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 596bc2f4d9b0..729c44e7c270 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -519,8 +519,8 @@  static inline void show_swap_cache_info(void)
 {
 }
 
-#define free_swap_and_cache(e) ({(is_migration_entry(e) || is_device_private_entry(e));})
-#define swapcache_prepare(e) ({(is_migration_entry(e) || is_device_private_entry(e));})
+#define free_swap_and_cache(e) is_special_entry(e)
+#define swapcache_prepare(e) is_special_entry(e)
 
 static inline int add_swap_count_continuation(swp_entry_t swp, gfp_t gfp_mask)
 {
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index d9b7c9132c2f..80cfa3985045 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -121,16 +121,6 @@  static inline bool is_write_device_private_entry(swp_entry_t entry)
 {
 	return unlikely(swp_type(entry) == SWP_DEVICE_WRITE);
 }
-
-static inline unsigned long device_private_entry_to_pfn(swp_entry_t entry)
-{
-	return swp_offset(entry);
-}
-
-static inline struct page *device_private_entry_to_page(swp_entry_t entry)
-{
-	return pfn_to_page(swp_offset(entry));
-}
 #else /* CONFIG_DEVICE_PRIVATE */
 static inline swp_entry_t make_device_private_entry(struct page *page, bool write)
 {
@@ -150,16 +140,6 @@  static inline bool is_write_device_private_entry(swp_entry_t entry)
 {
 	return false;
 }
-
-static inline unsigned long device_private_entry_to_pfn(swp_entry_t entry)
-{
-	return 0;
-}
-
-static inline struct page *device_private_entry_to_page(swp_entry_t entry)
-{
-	return NULL;
-}
 #endif /* CONFIG_DEVICE_PRIVATE */
 
 #ifdef CONFIG_MIGRATION
@@ -182,22 +162,6 @@  static inline int is_write_migration_entry(swp_entry_t entry)
 	return unlikely(swp_type(entry) == SWP_MIGRATION_WRITE);
 }
 
-static inline unsigned long migration_entry_to_pfn(swp_entry_t entry)
-{
-	return swp_offset(entry);
-}
-
-static inline struct page *migration_entry_to_page(swp_entry_t entry)
-{
-	struct page *p = pfn_to_page(swp_offset(entry));
-	/*
-	 * Any use of migration entries may only occur while the
-	 * corresponding page is locked
-	 */
-	BUG_ON(!PageLocked(compound_head(p)));
-	return p;
-}
-
 static inline void make_migration_entry_read(swp_entry_t *entry)
 {
 	*entry = swp_entry(SWP_MIGRATION_READ, swp_offset(*entry));
@@ -217,16 +181,6 @@  static inline int is_migration_entry(swp_entry_t swp)
 	return 0;
 }
 
-static inline unsigned long migration_entry_to_pfn(swp_entry_t entry)
-{
-	return 0;
-}
-
-static inline struct page *migration_entry_to_page(swp_entry_t entry)
-{
-	return NULL;
-}
-
 static inline void make_migration_entry_read(swp_entry_t *entryp) { }
 static inline void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
 					spinlock_t *ptl) { }
@@ -241,6 +195,11 @@  static inline int is_write_migration_entry(swp_entry_t entry)
 
 #endif
 
+static inline bool is_special_entry(swp_entry_t entry)
+{
+	return is_migration_entry(entry) || is_device_private_entry(entry);
+}
+
 struct page_vma_mapped_walk;
 
 #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
diff --git a/mm/hmm.c b/mm/hmm.c
index 943cb2ba4442..b10494b18b65 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -214,7 +214,7 @@  static inline bool hmm_is_device_private_entry(struct hmm_range *range,
 		swp_entry_t entry)
 {
 	return is_device_private_entry(entry) &&
-		device_private_entry_to_page(entry)->pgmap->owner ==
+		pfn_to_page(swp_offset(entry))->pgmap->owner ==
 		range->dev_private_owner;
 }
 
@@ -257,8 +257,7 @@  static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 			cpu_flags = HMM_PFN_VALID;
 			if (is_write_device_private_entry(entry))
 				cpu_flags |= HMM_PFN_WRITE;
-			*hmm_pfn = device_private_entry_to_pfn(entry) |
-					cpu_flags;
+			*hmm_pfn = swp_offset(entry) | cpu_flags;
 			return 0;
 		}
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 913c2b9e5c72..1e6a318de8c2 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5544,7 +5544,7 @@  static struct page *mc_handle_swap_pte(struct vm_area_struct *vma,
 	 * as special swap entry in the CPU page table.
 	 */
 	if (is_device_private_entry(ent)) {
-		page = device_private_entry_to_page(ent);
+		page = pfn_to_page(swp_offset(ent));
 		/*
 		 * MEMORY_DEVICE_PRIVATE means ZONE_DEVICE page and which have
 		 * a refcount of 1 when free (unlike normal page)
diff --git a/mm/memory.c b/mm/memory.c
index feff48e1465a..817d7a11ab7e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -718,7 +718,7 @@  copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		}
 		rss[MM_SWAPENTS]++;
 	} else if (is_migration_entry(entry)) {
-		page = migration_entry_to_page(entry);
+		page = pfn_to_page(swp_offset(entry));
 
 		rss[mm_counter(page)]++;
 
@@ -737,7 +737,7 @@  copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 			set_pte_at(src_mm, addr, src_pte, pte);
 		}
 	} else if (is_device_private_entry(entry)) {
-		page = device_private_entry_to_page(entry);
+		page = pfn_to_page(swp_offset(entry));
 
 		/*
 		 * Update rss count even for unaddressable pages, as
@@ -1274,7 +1274,7 @@  static unsigned long zap_pte_range(struct mmu_gather *tlb,
 
 		entry = pte_to_swp_entry(ptent);
 		if (is_device_private_entry(entry)) {
-			struct page *page = device_private_entry_to_page(entry);
+			struct page *page = pfn_to_page(swp_offset(entry));
 
 			if (unlikely(details && details->check_mapping)) {
 				/*
@@ -1303,7 +1303,7 @@  static unsigned long zap_pte_range(struct mmu_gather *tlb,
 		else if (is_migration_entry(entry)) {
 			struct page *page;
 
-			page = migration_entry_to_page(entry);
+			page = pfn_to_page(swp_offset(entry));
 			rss[mm_counter(page)]--;
 		}
 		if (unlikely(!free_swap_and_cache(entry)))
@@ -3271,7 +3271,7 @@  vm_fault_t do_swap_page(struct vm_fault *vmf)
 			migration_entry_wait(vma->vm_mm, vmf->pmd,
 					     vmf->address);
 		} else if (is_device_private_entry(entry)) {
-			vmf->page = device_private_entry_to_page(entry);
+			vmf->page = pfn_to_page(swp_offset(entry));
 			ret = vmf->page->pgmap->ops->migrate_to_ram(vmf);
 		} else if (is_hwpoison_entry(entry)) {
 			ret = VM_FAULT_HWPOISON;
diff --git a/mm/migrate.c b/mm/migrate.c
index 20ca887ea769..72adcc3d8f5b 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -321,7 +321,7 @@  void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
 	if (!is_migration_entry(entry))
 		goto out;
 
-	page = migration_entry_to_page(entry);
+	page = pfn_to_page(swp_offset(entry));
 
 	/*
 	 * Once page cache replacement of page migration started, page_count
@@ -361,7 +361,7 @@  void pmd_migration_entry_wait(struct mm_struct *mm, pmd_t *pmd)
 	ptl = pmd_lock(mm, pmd);
 	if (!is_pmd_migration_entry(*pmd))
 		goto unlock;
-	page = migration_entry_to_page(pmd_to_swp_entry(*pmd));
+	page = pfn_to_page(swp_offset(pmd_to_swp_entry(*pmd)));
 	if (!get_page_unless_zero(page))
 		goto unlock;
 	spin_unlock(ptl);
@@ -2437,7 +2437,7 @@  static int migrate_vma_collect_pmd(pmd_t *pmdp,
 			if (!is_device_private_entry(entry))
 				goto next;
 
-			page = device_private_entry_to_page(entry);
+			page = pfn_to_page(swp_offset(entry));
 			if (!(migrate->flags &
 				MIGRATE_VMA_SELECT_DEVICE_PRIVATE) ||
 			    page->pgmap->owner != migrate->pgmap_owner)
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 86e3a3688d59..34230d08556a 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -96,7 +96,7 @@  static bool check_pte(struct page_vma_mapped_walk *pvmw)
 		if (!is_migration_entry(entry))
 			return false;
 
-		pfn = migration_entry_to_pfn(entry);
+		pfn = swp_offset(entry);
 	} else if (is_swap_pte(*pvmw->pte)) {
 		swp_entry_t entry;
 
@@ -105,7 +105,7 @@  static bool check_pte(struct page_vma_mapped_walk *pvmw)
 		if (!is_device_private_entry(entry))
 			return false;
 
-		pfn = device_private_entry_to_pfn(entry);
+		pfn = swp_offset(entry);
 	} else {
 		if (!pte_present(*pvmw->pte))
 			return false;
@@ -200,7 +200,7 @@  bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 				if (is_migration_entry(pmd_to_swp_entry(*pvmw->pmd))) {
 					swp_entry_t entry = pmd_to_swp_entry(*pvmw->pmd);
 
-					if (migration_entry_to_page(entry) != page)
+					if (pfn_to_page(swp_offset(entry)) != page)
 						return not_found(pvmw);
 					return true;
 				}