
[STABLE,v5.15.y] mm/migrate: set swap entry values of THP tail pages properly.

Message ID: 20240305161941.92021-1-zi.yan@sent.com

Commit Message

Zi Yan March 5, 2024, 4:19 p.m. UTC
From: Zi Yan <ziy@nvidia.com>

The tail pages in a THP can have swap entry information stored in their
private field. When migrating to a new page, all tail pages of the new
page need to update ->private to avoid future data corruption.

The corresponding swapcache entries need to be updated as well. Commit
e71769ae5260 ("mm: enable thp migration for shmem thp") already fixed this
for shmem THPs.

Fixes: 616b8371539a ("mm: thp: enable thp migration in generic path")

Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 mm/migrate.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

Comments

David Hildenbrand March 5, 2024, 4:22 p.m. UTC | #1
On 05.03.24 17:19, Zi Yan wrote:
> From: Zi Yan <ziy@nvidia.com>
> 
> The tail pages in a THP can have swap entry information stored in their
> private field. When migrating to a new page, all tail pages of the new
> page need to update ->private to avoid future data corruption.
> 
> The corresponding swapcache entries need to be updated as well. Commit
> e71769ae5260 ("mm: enable thp migration for shmem thp") already fixed this
> for shmem THPs.
> 
> Fixes: 616b8371539a ("mm: thp: enable thp migration in generic path")
> 
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> ---
>   mm/migrate.c | 6 +++++-
>   1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/migrate.c b/mm/migrate.c
> index c7d5566623ad..c37af50f312d 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -424,8 +424,12 @@ int migrate_page_move_mapping(struct address_space *mapping,
>   	if (PageSwapBacked(page)) {
>   		__SetPageSwapBacked(newpage);
>   		if (PageSwapCache(page)) {
> +			int i;
> +
>   			SetPageSwapCache(newpage);
> -			set_page_private(newpage, page_private(page));
> +			for (i = 0; i < (1 << compound_order(page)); i++)
> +				set_page_private(newpage + i,
> +						 page_private(page + i));
>   		}
>   	} else {
>   		VM_BUG_ON_PAGE(PageSwapCache(page), page);

Acked-by: David Hildenbrand <david@redhat.com>
Zi Yan March 5, 2024, 4:33 p.m. UTC | #2
On 5 Mar 2024, at 11:19, Zi Yan wrote:

> From: Zi Yan <ziy@nvidia.com>
>
> The tail pages in a THP can have swap entry information stored in their
> private field. When migrating to a new page, all tail pages of the new
> page need to update ->private to avoid future data corruption.
>
> The corresponding swapcache entries need to be updated as well. Commit
> e71769ae5260 ("mm: enable thp migration for shmem thp") already fixed this
> for shmem THPs.
>

Closes: https://lore.kernel.org/linux-mm/1707814102-22682-1-git-send-email-quic_charante@quicinc.com/

> Fixes: 616b8371539a ("mm: thp: enable thp migration in generic path")
>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> [quoted patch trimmed]


--
Best Regards,
Yan, Zi

Patch

diff --git a/mm/migrate.c b/mm/migrate.c
index c7d5566623ad..c37af50f312d 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -424,8 +424,12 @@ int migrate_page_move_mapping(struct address_space *mapping,
 	if (PageSwapBacked(page)) {
 		__SetPageSwapBacked(newpage);
 		if (PageSwapCache(page)) {
+			int i;
+
 			SetPageSwapCache(newpage);
-			set_page_private(newpage, page_private(page));
+			for (i = 0; i < (1 << compound_order(page)); i++)
+				set_page_private(newpage + i,
+						 page_private(page + i));
 		}
 	} else {
 		VM_BUG_ON_PAGE(PageSwapCache(page), page);