Message ID | 20240305163213.95119-1-zi.yan@sent.com (mailing list archive)
---|---
State | New
Series | [STABLE,v4.19.y] mm/migrate: set swap entry values of THP tail pages properly.
On 05.03.24 17:32, Zi Yan wrote:
> From: Zi Yan <ziy@nvidia.com>
>
> The tail pages in a THP can have swap entry information stored in their
> private field. When migrating to a new page, all tail pages of the new
> page need to update ->private to avoid future data corruption.
>
> Corresponding swapcache entries need to be updated as well.
> e71769ae5260 ("mm: enable thp migration for shmem thp") fixed it already.
>
> Closes: https://lore.kernel.org/linux-mm/1707814102-22682-1-git-send-email-quic_charante@quicinc.com/
> Fixes: 616b8371539a ("mm: thp: enable thp migration in generic path")
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> ---
>  mm/migrate.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 171573613c39..893ea04498f7 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -514,8 +514,12 @@ int migrate_page_move_mapping(struct address_space *mapping,
>  	if (PageSwapBacked(page)) {
>  		__SetPageSwapBacked(newpage);
>  		if (PageSwapCache(page)) {
> +			int i;
> +
>  			SetPageSwapCache(newpage);
> -			set_page_private(newpage, page_private(page));
> +			for (i = 0; i < (1 << compound_order(page)); i++)
> +				set_page_private(newpage + i,
> +						 page_private(page + i));
>  		}
>  	} else {
>  		VM_BUG_ON_PAGE(PageSwapCache(page), page);

Thanks for taking care of all of these!

Acked-by: David Hildenbrand <david@redhat.com>
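For readers outside mm/, the effect of the fix can be sketched in plain userspace C. This is a toy model, not the kernel API: `struct toy_page`, `THP_ORDER`, and the two `migrate_private_*` helpers are illustrative names standing in for `struct page`, `compound_order()`, and `set_page_private()`/`page_private()`.

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-in for struct page: only the field this patch touches. */
struct toy_page {
	unsigned long private;	/* swap entry value for a swapcache page */
};

#define THP_ORDER 2	/* toy compound order: 1 << 2 = 4 subpages */

/* Pre-patch behavior: only the head page's ->private is copied,
 * leaving the tail pages of the new THP with stale values. */
static void migrate_private_buggy(struct toy_page *newpage,
				  const struct toy_page *page)
{
	newpage[0].private = page[0].private;
}

/* Patched behavior: copy ->private for the head and every tail page,
 * mirroring the loop over (1 << compound_order(page)) in the diff. */
static void migrate_private_fixed(struct toy_page *newpage,
				  const struct toy_page *page)
{
	int i;

	for (i = 0; i < (1 << THP_ORDER); i++)
		newpage[i].private = page[i].private;
}
```

In the buggy version, tail subpages of the new page keep whatever `->private` they started with, which is the data-corruption risk the commit message describes; the fixed version preserves every subpage's swap entry.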