[v2,4/6] mm/migrate_device: convert __migrate_device_pages() to folios

Message ID 20240216211320.222431-4-sidhartha.kumar@oracle.com
State New
Series [v2,1/6] mm/migrate: introduce migrate_pfn_to_folio()

Commit Message

Sidhartha Kumar Feb. 16, 2024, 9:13 p.m. UTC
Use migrate_pfn_to_folio() so we can work with folios directly in
__migrate_device_pages().
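
For reference, migrate_pfn_to_folio() is added in patch 1/6 (not shown
here); a plausible sketch, assuming it is a trivial wrapper around the
existing migrate_pfn_to_page():

	/* Sketch of the patch 1/6 helper; the real one may differ. */
	static inline struct folio *migrate_pfn_to_folio(unsigned long mpfn)
	{
		struct page *page = migrate_pfn_to_page(mpfn);

		return page ? page_folio(page) : NULL;
	}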

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 mm/migrate_device.c | 36 +++++++++++++++---------------------
 1 file changed, 15 insertions(+), 21 deletions(-)

Comments

Matthew Wilcox Feb. 16, 2024, 9:55 p.m. UTC | #1
On Fri, Feb 16, 2024 at 01:13:18PM -0800, Sidhartha Kumar wrote:
> Use migrate_pfn_to_folio() so we can work with folios directly in
> __migrate_device_pages().

I don't understand why this would be correct if we have multi-page
folios.

> @@ -719,33 +719,29 @@ static void __migrate_device_pages(unsigned long *src_pfns,
>  					migrate->pgmap_owner);
>  				mmu_notifier_invalidate_range_start(&range);
>  			}
> -			migrate_vma_insert_page(migrate, addr, newpage,
> +			migrate_vma_insert_page(migrate, addr, &dst->page,

It seems to me that a migration pfn is going to refer to a precise page.
Now you're telling it to insert the head page of the folio.  Isn't this
wrong?
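
To make that concrete (an illustration, not code from this patch):

	/*
	 * Sketch: for an order > 0 dst folio, the precise page named by
	 * the migration pfn and the folio's head page are different.
	 */
	struct page *page = migrate_pfn_to_page(dst_pfns[i]);
	struct folio *dst = page_folio(page);

	/* &dst->page is the head page; it equals 'page' only at order 0 */
	VM_WARN_ON(page != &dst->page);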

> @@ -753,13 +749,11 @@ static void __migrate_device_pages(unsigned long *src_pfns,
>  			continue;
>  		}
>  
> -		if (migrate && migrate->fault_page == page)
> -			r = migrate_folio_extra(mapping, page_folio(newpage),
> -						page_folio(page),
> -						MIGRATE_SYNC_NO_COPY, 1);
> +		if (migrate && migrate->fault_page == &src->page)

Shouldn't this rather be "page_folio(migrate->fault_page) == src"?
I.e., we're looking for two pages from the same folio, rather than the page
being the same as the head page of the folio?
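
Something like this, perhaps (a sketch; note the added NULL check, since
fault_page may be unset):

	if (migrate && migrate->fault_page &&
	    page_folio(migrate->fault_page) == src)
		r = migrate_folio_extra(mapping, dst, src,
					MIGRATE_SYNC_NO_COPY, 1);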
Sidhartha Kumar Feb. 16, 2024, 10 p.m. UTC | #2
On 2/16/24 1:55 PM, Matthew Wilcox wrote:
> On Fri, Feb 16, 2024 at 01:13:18PM -0800, Sidhartha Kumar wrote:
>> Use migrate_pfn_to_folio() so we can work with folios directly in
>> __migrate_device_pages().
> 
> I don't understand why this would be correct if we have multi-page
> folios.
> 

Alistair mentioned that he is working on order > 0 device page support, so I
was under the impression that device pages are currently only order 0.

Thanks,
Sid

>> @@ -719,33 +719,29 @@ static void __migrate_device_pages(unsigned long *src_pfns,
>>   					migrate->pgmap_owner);
>>   				mmu_notifier_invalidate_range_start(&range);
>>   			}
>> -			migrate_vma_insert_page(migrate, addr, newpage,
>> +			migrate_vma_insert_page(migrate, addr, &dst->page,
> 
> It seems to me that a migration pfn is going to refer to a precise page.
> Now you're telling it to insert the head page of the folio.  Isn't this
> wrong?
> 
>> @@ -753,13 +749,11 @@ static void __migrate_device_pages(unsigned long *src_pfns,
>>   			continue;
>>   		}
>>   
>> -		if (migrate && migrate->fault_page == page)
>> -			r = migrate_folio_extra(mapping, page_folio(newpage),
>> -						page_folio(page),
>> -						MIGRATE_SYNC_NO_COPY, 1);
>> +		if (migrate && migrate->fault_page == &src->page)
> 
> Shouldn't this rather be "page_folio(migrate->fault_page) == src"?
> I.e., we're looking for two pages from the same folio, rather than the page
> being the same as the head page of the folio?
>
Matthew Wilcox Feb. 16, 2024, 11:19 p.m. UTC | #3
On Fri, Feb 16, 2024 at 02:00:31PM -0800, Sidhartha Kumar wrote:
> On 2/16/24 1:55 PM, Matthew Wilcox wrote:
> > On Fri, Feb 16, 2024 at 01:13:18PM -0800, Sidhartha Kumar wrote:
> > > Use migrate_pfn_to_folio() so we can work with folios directly in
> > > __migrate_device_pages().
> > 
> > I don't understand why this would be correct if we have multi-page
> > folios.
> > 
> 
> Alistair mentioned that he is working on order > 0 device page support, so I
> was under the impression that device pages are currently only order 0.

That might well be true, but I'm *very* uncomfortable with doing a folio
conversion in core MM that won't work with large folios.  We need to
consider what will happen with large device folios.

(For filesystems, I am less bothered.  Individual filesystems control
whether they see large folios or not, and for lesser filesystems it may
never be worth converting them to support large folios)
Alistair Popple Feb. 19, 2024, 9:49 a.m. UTC | #4
Matthew Wilcox <willy@infradead.org> writes:

> On Fri, Feb 16, 2024 at 02:00:31PM -0800, Sidhartha Kumar wrote:
>> On 2/16/24 1:55 PM, Matthew Wilcox wrote:
>> > On Fri, Feb 16, 2024 at 01:13:18PM -0800, Sidhartha Kumar wrote:
>> > > Use migrate_pfn_to_folio() so we can work with folios directly in
>> > > __migrate_device_pages().
>> > 
>> > I don't understand why this would be correct if we have multi-page
>> > folios.
>> > 
>> 
>> Alistair mentioned that he is working on order > 0 device page support, so I
>> was under the impression that device pages are currently only order 0.

Right, at the moment we only create order 0 device private pages.

> That might well be true, but I'm *very* uncomfortable with doing a folio
> conversion in core MM that won't work with large folios.  We need to
> consider what will happen with large device folios.

Yes. Perhaps it would be better to post these fixes as part of a series
introducing multi-page folio support for device private pages after
all. I don't think we can address your other comments here without first
describing how large folios for device private pages would work, and the
best way to describe that is with the patches.

I doubt I will get those posted this week, but will aim to do so in the
next 2-3 weeks.

> (For filesystems, I am less bothered.  Individual filesystems control
> whether they see large folios or not, and for lesser filesystems it may
> never be worth converting them to support large folios)

Patch

diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 81623f2d72c2b..d9633c7b1d21b 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -687,17 +687,17 @@  static void __migrate_device_pages(unsigned long *src_pfns,
 	bool notified = false;
 
 	for (i = 0; i < npages; i++) {
-		struct page *newpage = migrate_pfn_to_page(dst_pfns[i]);
-		struct page *page = migrate_pfn_to_page(src_pfns[i]);
+		struct folio *dst = migrate_pfn_to_folio(dst_pfns[i]);
+		struct folio *src = migrate_pfn_to_folio(src_pfns[i]);
 		struct address_space *mapping;
 		int r;
 
-		if (!newpage) {
+		if (!dst) {
 			src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
 			continue;
 		}
 
-		if (!page) {
+		if (!src) {
 			unsigned long addr;
 
 			if (!(src_pfns[i] & MIGRATE_PFN_MIGRATE))
@@ -719,33 +719,29 @@  static void __migrate_device_pages(unsigned long *src_pfns,
 					migrate->pgmap_owner);
 				mmu_notifier_invalidate_range_start(&range);
 			}
-			migrate_vma_insert_page(migrate, addr, newpage,
+			migrate_vma_insert_page(migrate, addr, &dst->page,
 						&src_pfns[i]);
 			continue;
 		}
 
-		mapping = page_mapping(page);
+		mapping = folio_mapping(src);
 
-		if (is_device_private_page(newpage) ||
-		    is_device_coherent_page(newpage)) {
+		if (folio_is_device_private(dst) ||
+		    folio_is_device_coherent(dst)) {
 			if (mapping) {
-				struct folio *folio;
-
-				folio = page_folio(page);
-
 				/*
 				 * For now only support anonymous memory migrating to
 				 * device private or coherent memory.
 				 *
 				 * Try to get rid of swap cache if possible.
 				 */
-				if (!folio_test_anon(folio) ||
-				    !folio_free_swap(folio)) {
+				if (!folio_test_anon(src) ||
+				    !folio_free_swap(src)) {
 					src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
 					continue;
 				}
 			}
-		} else if (is_zone_device_page(newpage)) {
+		} else if (folio_is_zone_device(dst)) {
 			/*
 			 * Other types of ZONE_DEVICE page are not supported.
 			 */
@@ -753,13 +749,11 @@  static void __migrate_device_pages(unsigned long *src_pfns,
 			continue;
 		}
 
-		if (migrate && migrate->fault_page == page)
-			r = migrate_folio_extra(mapping, page_folio(newpage),
-						page_folio(page),
-						MIGRATE_SYNC_NO_COPY, 1);
+		if (migrate && migrate->fault_page == &src->page)
+			r = migrate_folio_extra(mapping, dst, src,
+								MIGRATE_SYNC_NO_COPY, 1);
 		else
-			r = migrate_folio(mapping, page_folio(newpage),
-					page_folio(page), MIGRATE_SYNC_NO_COPY);
+			r = migrate_folio(mapping, dst, src, MIGRATE_SYNC_NO_COPY);
 		if (r != MIGRATEPAGE_SUCCESS)
 			src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
 	}