
[2/2] mm/rmap: Fixup copying of soft dirty and uffd ptes

Message ID 20200824083128.12684-2-alistair@popple.id.au (mailing list archive)
State New, archived
Series [1/2] mm/migrate: Fixup setting UFFD_WP flag

Commit Message

Alistair Popple Aug. 24, 2020, 8:31 a.m. UTC
During memory migration, a pte is temporarily replaced with a migration
swap pte. Some pte bits from the existing mapping, such as the soft-dirty
and uffd write-protect bits, are preserved by copying them to the
temporary migration swap pte.

However, these bits are not stored at the same location for swap and
non-swap ptes. Therefore testing these bits requires using the
appropriate helper function for the given pte type.

Unfortunately, several code locations were found where the wrong helper
function is used to test the soft_dirty and uffd_wp bits, which leads
to them being incorrectly set or cleared during page migration.

Fix these by using the correct tests based on the pte type.

Fixes: a5430dda8a3a ("mm/migrate: support un-addressable ZONE_DEVICE page in migration")
Fixes: 8c3328f1f36a ("mm/migrate: migrate_vma() unmap page from vma while collecting pages")
Fixes: f45ec5ff16a7 ("userfaultfd: wp: support swap and page migration")
Signed-off-by: Alistair Popple <alistair@popple.id.au>
Cc: stable@vger.kernel.org
---
 mm/migrate.c | 6 ++++--
 mm/rmap.c    | 9 +++++++--
 2 files changed, 11 insertions(+), 4 deletions(-)
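
For reference, the distinction the commit message relies on can be
sketched as a small helper (hypothetical code, not part of this patch;
pte_is_soft_dirty() is an invented name, and the soft-dirty helpers are
stubs returning 0 on architectures without soft-dirty support):

	/* Assumes <linux/mm.h> and <linux/swapops.h>. */
	static bool pte_is_soft_dirty(pte_t pte)
	{
		/* Present ptes keep the soft-dirty bit in the normal pte layout. */
		if (pte_present(pte))
			return pte_soft_dirty(pte);
		/* Swap and migration entries keep it at a different bit position. */
		if (is_swap_pte(pte))
			return pte_swp_soft_dirty(pte);
		/* pte_none(): nothing mapped, nothing to preserve. */
		return false;
	}

The same pattern applies to the uffd write-protect bit via pte_uffd_wp()
and pte_swp_uffd_wp().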

Comments

Peter Xu Aug. 24, 2020, 3:43 p.m. UTC | #1
On Mon, Aug 24, 2020 at 06:31:28PM +1000, Alistair Popple wrote:
> During memory migration, a pte is temporarily replaced with a migration
> swap pte. Some pte bits from the existing mapping, such as the soft-dirty
> and uffd write-protect bits, are preserved by copying them to the
> temporary migration swap pte.
> 
> However, these bits are not stored at the same location for swap and
> non-swap ptes. Therefore testing these bits requires using the
> appropriate helper function for the given pte type.
> 
> Unfortunately, several code locations were found where the wrong helper
> function is used to test the soft_dirty and uffd_wp bits, which leads
> to them being incorrectly set or cleared during page migration.
> 
> Fix these by using the correct tests based on the pte type.
> 
> Fixes: a5430dda8a3a ("mm/migrate: support un-addressable ZONE_DEVICE page in migration")
> Fixes: 8c3328f1f36a ("mm/migrate: migrate_vma() unmap page from vma while collecting pages")
> Fixes: f45ec5ff16a7 ("userfaultfd: wp: support swap and page migration")
> Signed-off-by: Alistair Popple <alistair@popple.id.au>
> Cc: stable@vger.kernel.org
> ---
>  mm/migrate.c | 6 ++++--
>  mm/rmap.c    | 9 +++++++--
>  2 files changed, 11 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/migrate.c b/mm/migrate.c
> index ddb64253fe3e..5bea19c496af 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2427,9 +2427,11 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>  			entry = make_migration_entry(page, mpfn &
>  						     MIGRATE_PFN_WRITE);
>  			swp_pte = swp_entry_to_pte(entry);
> -			if (pte_soft_dirty(pte))
> +			if ((is_swap_pte(pte) && pte_swp_soft_dirty(pte))
> +				|| (!is_swap_pte(pte) && pte_soft_dirty(pte)))
>  				swp_pte = pte_swp_mksoft_dirty(swp_pte);
> -			if (pte_uffd_wp(pte))
> +			if ((is_swap_pte(pte) && pte_swp_uffd_wp(pte))
> +				|| (!is_swap_pte(pte) && pte_uffd_wp(pte)))
>  				swp_pte = pte_swp_mkuffd_wp(swp_pte);
>  			set_pte_at(mm, addr, ptep, swp_pte);

The worst case is we'll call is_swap_pte() four times for each entry. Also,
considering we know it's not a pte_none() when we reach here, how about:

  if (pte_present(pte)) {
    // pte handling of both soft dirty and uffd-wp
  } else {
    // swap handling of both soft dirty and uffd-wp
  }

?

Thanks,
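
Concretely, that restructuring in migrate_vma_collect_pmd() could look
something like the sketch below (illustrative only, not necessarily what
the respin will contain):

			swp_pte = swp_entry_to_pte(entry);
			if (pte_present(pte)) {
				/* Present pte: read the normal pte bits. */
				if (pte_soft_dirty(pte))
					swp_pte = pte_swp_mksoft_dirty(swp_pte);
				if (pte_uffd_wp(pte))
					swp_pte = pte_swp_mkuffd_wp(swp_pte);
			} else {
				/* Not present and not none, so a swap pte. */
				if (pte_swp_soft_dirty(pte))
					swp_pte = pte_swp_mksoft_dirty(swp_pte);
				if (pte_swp_uffd_wp(pte))
					swp_pte = pte_swp_mkuffd_wp(swp_pte);
			}
			set_pte_at(mm, addr, ptep, swp_pte);

This tests pte_present() once per entry rather than calling is_swap_pte()
up to four times, and relies on pte_none() entries having been skipped
earlier in the loop.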
Alistair Popple Aug. 25, 2020, 5:19 a.m. UTC | #2
On Tuesday, 25 August 2020 1:43:59 AM AEST Peter Xu wrote:
> > --- a/mm/migrate.c
> > +++ b/mm/migrate.c
> > @@ -2427,9 +2427,11 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
> >  			entry = make_migration_entry(page, mpfn &
> >  						     MIGRATE_PFN_WRITE);
> >  			swp_pte = swp_entry_to_pte(entry);
> > -			if (pte_soft_dirty(pte))
> > +			if ((is_swap_pte(pte) && pte_swp_soft_dirty(pte))
> > +				|| (!is_swap_pte(pte) && pte_soft_dirty(pte)))
> >  				swp_pte = pte_swp_mksoft_dirty(swp_pte);
> > -			if (pte_uffd_wp(pte))
> > +			if ((is_swap_pte(pte) && pte_swp_uffd_wp(pte))
> > +				|| (!is_swap_pte(pte) && pte_uffd_wp(pte)))
> >  				swp_pte = pte_swp_mkuffd_wp(swp_pte);
> >  			set_pte_at(mm, addr, ptep, swp_pte);
> 
> The worst case is we'll call is_swap_pte() four times for each entry. Also,
> considering we know it's not a pte_none() when we reach here, how about:
> 
>   if (pte_present(pte)) {
>     // pte handling of both soft dirty and uffd-wp
>   } else {
>     // swap handling of both soft dirty and uffd-wp
>   }
> 
> ?

Works for me, I'd missed that we knew it was not a pte_none(), so I will respin.
Thanks!

 - Alistair
 
> Thanks,

Patch

diff --git a/mm/migrate.c b/mm/migrate.c
index ddb64253fe3e..5bea19c496af 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2427,9 +2427,11 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 			entry = make_migration_entry(page, mpfn &
 						     MIGRATE_PFN_WRITE);
 			swp_pte = swp_entry_to_pte(entry);
-			if (pte_soft_dirty(pte))
+			if ((is_swap_pte(pte) && pte_swp_soft_dirty(pte))
+				|| (!is_swap_pte(pte) && pte_soft_dirty(pte)))
 				swp_pte = pte_swp_mksoft_dirty(swp_pte);
-			if (pte_uffd_wp(pte))
+			if ((is_swap_pte(pte) && pte_swp_uffd_wp(pte))
+				|| (!is_swap_pte(pte) && pte_uffd_wp(pte)))
 				swp_pte = pte_swp_mkuffd_wp(swp_pte);
 			set_pte_at(mm, addr, ptep, swp_pte);
 
diff --git a/mm/rmap.c b/mm/rmap.c
index 83cc459edc40..9425260774a1 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1511,9 +1511,14 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 			 */
 			entry = make_migration_entry(page, 0);
 			swp_pte = swp_entry_to_pte(entry);
-			if (pte_soft_dirty(pteval))
+
+			/*
+			 * pteval maps a zone device page and is therefore
+			 * a swap pte.
+			 */
+			if (pte_swp_soft_dirty(pteval))
 				swp_pte = pte_swp_mksoft_dirty(swp_pte);
-			if (pte_uffd_wp(pteval))
+			if (pte_swp_uffd_wp(pteval))
 				swp_pte = pte_swp_mkuffd_wp(swp_pte);
 			set_pte_at(mm, pvmw.address, pvmw.pte, swp_pte);
 			/*