diff mbox series

[next] mm/migrate: fix remove_migration_pte() of hugetlb entry

Message ID bd28ebcf-4d42-7184-8189-ffed6fe7d4dc@google.com (mailing list archive)
State New

Commit Message

Hugh Dickins Feb. 27, 2022, 2:25 a.m. UTC
The foliation of remove_migration_pte() is currently wrong on hugetlb
anon entries, causing LTP move_pages12 to crash on BUG_ON(!PageLocked)
in hugepage_add_anon_rmap().

Fixes: b4010e88f071 ("mm/migrate: Convert remove_migration_ptes() to folios")
Signed-off-by: Hugh Dickins <hughd@google.com>
---
Please just fold in if you agree.

 mm/migrate.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

Comments

Matthew Wilcox (Oracle) Feb. 28, 2022, 8:49 p.m. UTC | #1
On Sat, Feb 26, 2022 at 06:25:15PM -0800, Hugh Dickins wrote:
> -		if (!folio_test_ksm(folio))
> +		/* Skip call in common case, plus .pgoff is invalid for KSM */
> +		if (pvmw.nr_pages != 1 && !folio_test_hugetlb(folio))
>  			idx = linear_page_index(vma, pvmw.address) - pvmw.pgoff;

How do you feel about this instead?

-               if (!folio_test_ksm(folio))
+               /* pgoff is invalid for ksm pages, but they are never large */
+               if (folio_test_large(folio) && !folio_test_hugetlb(folio))
                        idx = linear_page_index(vma, pvmw.address) - pvmw.pgoff;
Hugh Dickins Feb. 28, 2022, 10:18 p.m. UTC | #2
On Mon, 28 Feb 2022, Matthew Wilcox wrote:
> On Sat, Feb 26, 2022 at 06:25:15PM -0800, Hugh Dickins wrote:
> > -		if (!folio_test_ksm(folio))
> > +		/* Skip call in common case, plus .pgoff is invalid for KSM */
> > +		if (pvmw.nr_pages != 1 && !folio_test_hugetlb(folio))
> >  			idx = linear_page_index(vma, pvmw.address) - pvmw.pgoff;
> 
> How do you feel about this instead?
> 
> -               if (!folio_test_ksm(folio))
> +               /* pgoff is invalid for ksm pages, but they are never large */
> +               if (folio_test_large(folio) && !folio_test_hugetlb(folio))
>                         idx = linear_page_index(vma, pvmw.address) - pvmw.pgoff;
> 

That looks nicer to me too.  I'll assume that's what you will add
or squash in your tree, and no need for me to resend - thanks.

Hugh

Patch

--- mmotm/mm/migrate.c
+++ linux/mm/migrate.c
@@ -182,7 +182,8 @@ static bool remove_migration_pte(struct
 		struct page *new;
 		unsigned long idx = 0;
 
-		if (!folio_test_ksm(folio))
+		/* Skip call in common case, plus .pgoff is invalid for KSM */
+		if (pvmw.nr_pages != 1 && !folio_test_hugetlb(folio))
 			idx = linear_page_index(vma, pvmw.address) - pvmw.pgoff;
 		new = folio_page(folio, idx);