diff mbox series

[RFC,4/5] mm: add generic type support for device zone page migration

Message ID 20210527230809.3701-5-Felix.Kuehling@amd.com (mailing list archive)
State New, archived
Series Support DEVICE_GENERIC memory in migrate_vma_*

Commit Message

Felix Kuehling May 27, 2021, 11:08 p.m. UTC
From: Alex Sierra <alex.sierra@amd.com>

This support is only for generic type anonymous memory.
Generic type zone device pages require taking an extra reference,
as is done for the device private type.
Also, add support for migrating page metadata for the generic device type.

Signed-off-by: Alex Sierra <alex.sierra@amd.com>
---
 mm/migrate.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

Comments

Christoph Hellwig May 29, 2021, 6:40 a.m. UTC | #1
On Thu, May 27, 2021 at 07:08:08PM -0400, Felix Kuehling wrote:
> -	expected_count += is_device_private_page(page);
> +	expected_count +=
> +			(is_device_private_page(page) || is_device_generic_page(page));

Please avoid the completely unreadable overly long lines.  And given
how often this check is duplicated you probably really want a helper.
And properly document it while you're at it.
Patch

diff --git a/mm/migrate.c b/mm/migrate.c
index 20ca887ea769..33e573a992e5 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -380,7 +380,8 @@  static int expected_page_refs(struct address_space *mapping, struct page *page)
 	 * Device private pages have an extra refcount as they are
 	 * ZONE_DEVICE pages.
 	 */
-	expected_count += is_device_private_page(page);
+	expected_count +=
+			(is_device_private_page(page) || is_device_generic_page(page));
 	if (mapping)
 		expected_count += thp_nr_pages(page) + page_has_private(page);
 
@@ -2607,7 +2608,7 @@  static bool migrate_vma_check_page(struct page *page)
 		 * FIXME proper solution is to rework migration_entry_wait() so
 		 * it does not need to take a reference on page.
 		 */
-		return is_device_private_page(page);
+		return is_device_private_page(page) | is_device_generic_page(page);
 	}
 
 	/* For file back page */
@@ -3069,10 +3070,12 @@  void migrate_vma_pages(struct migrate_vma *migrate)
 		mapping = page_mapping(page);
 
 		if (is_zone_device_page(newpage)) {
-			if (is_device_private_page(newpage)) {
+			if (is_device_private_page(newpage) ||
+			    is_device_generic_page(newpage)) {
 				/*
-				 * For now only support private anonymous when
-				 * migrating to un-addressable device memory.
+				 * For now only support private and devdax/generic
+				 * anonymous when migrating to un-addressable
+				 * device memory.
 				 */
 				if (mapping) {
 					migrate->src[i] &= ~MIGRATE_PFN_MIGRATE;