
[STABLE,v6.1.y] mm/migrate: set swap entry values of THP tail pages properly.

Message ID 20240306155217.118467-1-zi.yan@sent.com (mailing list archive)
State New

Commit Message

Zi Yan March 6, 2024, 3:52 p.m. UTC
From: Zi Yan <ziy@nvidia.com>

The tail pages of a THP can have swap entry information stored in their
->private field. When migrating to a new page, the ->private field of
every tail page of the new page must be updated as well, to avoid future
data corruption.
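For context, the invariant being restored can be sketched as a
hypothetical debug helper (illustrative only, not part of this patch),
assuming the pre-6.6 swapcache layout described below, where subpage i
of a swapcached THP stores the head swap entry's value plus i in
->private:

/*
 * Illustrative only: on pre-6.6 kernels, subpage i of a swapcached
 * THP stores swap entry value (head entry + i) in its ->private
 * field, set when the THP was added to the swapcache.
 */
static bool thp_subpage_swap_entries_ok(struct folio *folio)
{
	unsigned long head_val = page_private(folio_page(folio, 0));
	long i;

	for (i = 1; i < folio_nr_pages(folio); i++)
		if (page_private(folio_page(folio, i)) != head_val + i)
			return false;
	return true;
}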

This fix is stable-only, since after commit 07e09c483cbe ("mm/huge_memory:
work on folio->swap instead of page->private when splitting folio"),
subpages of a swapcached THP no longer require this maintenance.

Adding THPs to the swapcache was introduced in commit
38d8b4e6bdc87 ("mm, THP, swap: delay splitting THP during swap out"),
where each subpage of a THP added to the swapcache had its own swapcache
entry and required the ->private field to point to the correct swapcache
entry. Later, when THP migration functionality was implemented in commit
616b8371539a6 ("mm: thp: enable thp migration in generic path"),
it initially did not handle the subpages of swapcached THPs, failing to
update their ->private fields or replace the subpage pointers in the
swapcache. Subsequently, commit e71769ae5260 ("mm: enable thp migration
for shmem thp") addressed the swapcache update aspect. This patch fixes
the update of subpage ->private fields.

Closes: https://lore.kernel.org/linux-mm/1707814102-22682-1-git-send-email-quic_charante@quicinc.com/
Fixes: 616b8371539a ("mm: thp: enable thp migration in generic path")
Signed-off-by: Zi Yan <ziy@nvidia.com>
Acked-by: David Hildenbrand <david@redhat.com>
---
 mm/migrate.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

Comments

Charan Teja Kalla March 13, 2024, 11:23 a.m. UTC | #1
On 3/6/2024 9:22 PM, Zi Yan wrote:
[...]
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> Acked-by: David Hildenbrand <david@redhat.com>

Tested this patch on the 6.1 kernel and observed no issues. With that,

Reported-and-tested-by: Charan Teja Kalla <quic_charante@quicinc.com>

Thanks,
Charan

Patch

diff --git a/mm/migrate.c b/mm/migrate.c
index c93dd6a31c31..c5968021fde0 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -423,8 +423,12 @@ int folio_migrate_mapping(struct address_space *mapping,
 	if (folio_test_swapbacked(folio)) {
 		__folio_set_swapbacked(newfolio);
 		if (folio_test_swapcache(folio)) {
+			int i;
+
 			folio_set_swapcache(newfolio);
-			newfolio->private = folio_get_private(folio);
+			for (i = 0; i < nr; i++)
+				set_page_private(folio_page(newfolio, i),
+					page_private(folio_page(folio, i)));
 		}
 		entries = nr;
 	} else {
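To make the failure mode concrete, a hedged sketch (hypothetical debug
helper, not part of the patch): with the old single assignment, only the
head page of newfolio received a valid swap entry value, so a check like
the one below would trip on every tail page of a migrated swapcached THP.

/*
 * Hypothetical check, for illustration only: after migration, every
 * subpage of newfolio should carry the same swap entry value as the
 * corresponding subpage of the old folio. Without the fix above, the
 * tail-page comparisons fail because only the head page's ->private
 * was copied.
 */
static void check_migrated_swap_entries(struct folio *newfolio,
					struct folio *folio, long nr)
{
	long i;

	for (i = 0; i < nr; i++)
		VM_WARN_ON_ONCE(page_private(folio_page(newfolio, i)) !=
				page_private(folio_page(folio, i)));
}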