[v1,3/4] smaps: use vm_normal_page_pmd() instead of follow_trans_huge_pmd()

Message ID 20230727212845.135673-4-david@redhat.com (mailing list archive)
State New, archived
Series smaps / mm/gup: fix gup_can_follow_protnone fallout

Commit Message

David Hildenbrand July 27, 2023, 9:28 p.m. UTC
We really shouldn't be using a GUP-internal helper if it can be avoided,
and avoiding the FOLL_FORCE here is certainly desirable.

Similar to smaps_pte_entry(), which uses vm_normal_page(), let's use
vm_normal_page_pmd() -- which didn't exist back when this code was
introduced -- as it similarly refuses to return the huge zeropage (see
the sketch after the diffstat below).

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 fs/proc/task_mmu.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)
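
For reference, a condensed sketch of the vm_normal_page_pmd() filtering
this change relies on. This is a simplified reading of the mm/memory.c
implementation, not the verbatim code (the real function also handles
the VM_MIXEDMAP and COW-mapping subcases), and the _sketch suffix is
ours:

static struct page *vm_normal_page_pmd_sketch(struct vm_area_struct *vma,
					      unsigned long addr, pmd_t pmd)
{
	unsigned long pfn = pmd_pfn(pmd);

	/* PFN/mixed mappings may have no struct page behind them. */
	if (unlikely(vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP)))
		return NULL;	/* simplified; the real code has subcases */
	if (pmd_devmap(pmd))
		return NULL;
	/* The shared huge zero page is filtered out, as smaps needs. */
	if (is_huge_zero_pmd(pmd))
		return NULL;

	return pfn_to_page(pfn);
}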
Patch

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 7075ce11dc7d..b8ea270bf68b 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -571,12 +571,7 @@  static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
 	bool migration = false;
 
 	if (pmd_present(*pmd)) {
-		/*
-		 * FOLL_DUMP will return -EFAULT on huge zero page
-		 * FOLL_FORCE follow a PROT_NONE mapped page
-		 */
-		page = follow_trans_huge_pmd(vma, addr, pmd,
-					     FOLL_DUMP | FOLL_FORCE);
+		page = vm_normal_page_pmd(vma, addr, *pmd);
 	} else if (unlikely(thp_migration_supported() && is_swap_pmd(*pmd))) {
 		swp_entry_t entry = pmd_to_swp_entry(*pmd);