
[v1,2/2] mm/userfaultfd: don't consider uffd-wp bit of writable migration entries

Message ID 20230405142535.493854-3-david@redhat.com
Series mm/userfaultfd: fix and cleanup for migration entries with uffd-wp

Commit Message

David Hildenbrand April 5, 2023, 2:25 p.m. UTC
If we end up with a writable migration entry that has the uffd-wp bit set,
we already messed up: the source PTE/PMD was writable, which means we could
have modified the page without notifying uffd first. Setting the uffd-wp
bit always implies converting migration entries to !writable migration
entries.
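
For context, a minimal sketch of what that conversion looks like (modeled
on the non-present path of change_pte_range(); oldpte and uffd_wp are
assumed from that surrounding context, and this is not the verbatim kernel
code):

	swp_entry_t entry = pte_to_swp_entry(oldpte);
	pte_t newpte = oldpte;

	if (is_writable_migration_entry(entry)) {
		/* Downgrade to a readable migration entry first, ... */
		entry = make_readable_migration_entry(swp_offset(entry));
		newpte = swp_entry_to_pte(entry);
		if (pte_swp_soft_dirty(oldpte))
			newpte = pte_swp_mksoft_dirty(newpte);
	}
	/* ... so an entry marked uffd-wp is never left writable. */
	if (uffd_wp)
		newpte = pte_swp_mkuffd_wp(newpte);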

Commit 8f34f1eac382 ("mm/userfaultfd: fix uffd-wp special cases for
fork()") lists "3. Forget to carry over uffd-wp bit for a write
migration huge pmd entry" as one of the cases it fixes, but it doesn't
really explain why that case should be relevant.

So let's remove that code to avoid hiding a potential underlying issue
(in the future, we might want to warn when creating writable migration
entries that have the uffd-wp bit set -- or, even better, when turning a
PTE writable that still has the uffd-wp bit set).
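
To make the idea concrete, such sanity checks could look roughly like this
(placement and surrounding variable names are illustrative only, not part
of this patch):

	/* When about to create a migration entry from a swap PTE: */
	VM_WARN_ON_ONCE(is_writable_migration_entry(entry) &&
			pte_swp_uffd_wp(pteval));

	/* When about to make a present PTE writable: */
	VM_WARN_ON_ONCE(pte_uffd_wp(pte));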

This now matches the handling for hugetlb migration entries in
hugetlb_change_protection().
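
For comparison, the hugetlb path roughly has the following shape
(paraphrased from hugetlb_change_protection(), not quoted verbatim): the
uffd-wp bit of the old entry is never carried over; it is set or cleared
purely based on the current request:

	if (!is_readable_migration_entry(entry)) {
		entry = make_readable_migration_entry(swp_offset(entry));
		newpte = swp_entry_to_pte(entry);
		if (uffd_wp)
			newpte = pte_swp_mkuffd_wp(newpte);
		else if (uffd_wp_resolve)
			newpte = pte_swp_clear_uffd_wp(newpte);
		set_huge_pte_at(mm, address, ptep, newpte);
	}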

In copy_huge_pmd()/copy_nonpresent_pte()/copy_hugetlb_page_range(), we
still copy the uffd-wp bit even for writable migration entries, but only
because we have unified handling for "writable" and "readable-exclusive"
migration entries, and we care about preserving the uffd-wp bit for the
latter.
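
To illustrate that unified handling, a simplified sketch of the fork-time
copy logic (modeled on copy_nonpresent_pte(); variable names are assumed
from that context):

	if (is_writable_migration_entry(entry) ||
	    is_readable_exclusive_migration_entry(entry)) {
		/*
		 * Both flavors are demoted to a plain readable entry and
		 * share this path, which is why the uffd-wp bit gets
		 * copied even for the writable flavor.
		 */
		entry = make_readable_migration_entry(swp_offset(entry));
		pte = swp_entry_to_pte(entry);
		if (pte_swp_soft_dirty(orig_pte))
			pte = pte_swp_mksoft_dirty(pte);
		if (pte_swp_uffd_wp(orig_pte))
			pte = pte_swp_mkuffd_wp(pte);
		set_pte_at(src_mm, addr, src_pte, pte);
	}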

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/huge_memory.c | 2 --
 mm/mprotect.c    | 2 --
 2 files changed, 4 deletions(-)

Comments

Peter Xu April 5, 2023, 3:15 p.m. UTC | #1
On Wed, Apr 05, 2023 at 04:25:35PM +0200, David Hildenbrand wrote:
> [... full patch description snipped; quoted in full above ...]
>
> Signed-off-by: David Hildenbrand <david@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

I think that was mostly done for sanity: carrying over one generic bit
across present <-> !present conversions, even if uffd-wp is not that
generic and is currently closely bound to the write bit.

E.g., we will need to be more careful if we want to change the meaning of
the uffd-wp bit some day, but that'll always be challenging anyway, so it's
not something this patch changes.

Thanks,

Patch

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index bdda4f426d58..f977c965fdad 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1853,8 +1853,6 @@  int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			newpmd = swp_entry_to_pmd(entry);
 			if (pmd_swp_soft_dirty(*pmd))
 				newpmd = pmd_swp_mksoft_dirty(newpmd);
-			if (pmd_swp_uffd_wp(*pmd))
-				newpmd = pmd_swp_mkuffd_wp(newpmd);
 		} else {
 			newpmd = *pmd;
 		}
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 13e84d8c0797..e04e9ea62ae7 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -223,8 +223,6 @@  static long change_pte_range(struct mmu_gather *tlb,
 				newpte = swp_entry_to_pte(entry);
 				if (pte_swp_soft_dirty(oldpte))
 					newpte = pte_swp_mksoft_dirty(newpte);
-				if (pte_swp_uffd_wp(oldpte))
-					newpte = pte_swp_mkuffd_wp(newpte);
 			} else if (is_writable_device_private_entry(entry)) {
 				/*
 				 * We do not preserve soft-dirtiness. See