
[1/2] mm/migrate_device: further convert migrate_device_unmap() to folios

Message ID 20240214202055.77776-1-sidhartha.kumar@oracle.com (mailing list archive)
State New
Series [1/2] mm/migrate_device: further convert migrate_device_unmap() to folios

Commit Message

Sidhartha Kumar Feb. 14, 2024, 8:20 p.m. UTC
migrate_device_unmap() already has a folio, so we can use the folio
versions of is_zone_device_page() and putback_lru_page().

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 mm/migrate_device.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

Comments

Alistair Popple Feb. 14, 2024, 10:38 p.m. UTC | #1
Sidhartha Kumar <sidhartha.kumar@oracle.com> writes:

> migrate_device_unmap() already has a folio, so we can use the folio
> versions of is_zone_device_page() and putback_lru_page().
>
> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
> ---
>  mm/migrate_device.c | 18 +++++++++---------
>  1 file changed, 9 insertions(+), 9 deletions(-)
>
> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
> index b6c27c76e1a0b..9152a329b0a68 100644
> --- a/mm/migrate_device.c
> +++ b/mm/migrate_device.c
> @@ -377,33 +377,33 @@ static unsigned long migrate_device_unmap(unsigned long *src_pfns,
>  			continue;
>  		}
>  
> +		folio = page_folio(page);

Instead of open coding the migrate pfn to folio conversion I think we
should define a migrate_pfn_to_folio() and get rid of the intermediate
local variable. This would also allow a minor clean up to the final for
loop in migrate_device_unmap().
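
For illustration, such a helper might look roughly like this (a sketch
only, modelled on the existing migrate_pfn_to_page() helper in
include/linux/migrate.h; it is not part of the posted series):

	static inline struct folio *migrate_pfn_to_folio(unsigned long mpfn)
	{
		struct page *page = migrate_pfn_to_page(mpfn);

		/* NULL when the entry doesn't have MIGRATE_PFN_VALID set */
		return page ? page_folio(page) : NULL;
	}

The loop here could then do folio = migrate_pfn_to_folio(src_pfns[i])
up front rather than converting via the page.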

>  		/* ZONE_DEVICE pages are not on LRU */
> -		if (!is_zone_device_page(page)) {
> -			if (!PageLRU(page) && allow_drain) {
> +		if (!folio_is_zone_device(folio)) {
> +			if (!folio_test_lru(folio) && allow_drain) {
>  				/* Drain CPU's lru cache */
>  				lru_add_drain_all();
>  				allow_drain = false;
>  			}
>  
> -			if (!isolate_lru_page(page)) {
> +			if (!folio_isolate_lru(folio)) {
>  				src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
>  				restore++;
>  				continue;
>  			}
>  
>  			/* Drop the reference we took in collect */
> -			put_page(page);
> +			folio_put(folio);
>  		}
>  
> -		folio = page_folio(page);
>  		if (folio_mapped(folio))
>  			try_to_migrate(folio, 0);
>  
> -		if (page_mapped(page) ||
> +		if (folio_mapped(folio) ||
>  		    !migrate_vma_check_page(page, fault_page)) {
> -			if (!is_zone_device_page(page)) {
> -				get_page(page);
> -				putback_lru_page(page);
> +			if (!folio_is_zone_device(folio)) {
> +				folio_get(folio);
> +				folio_putback_lru(folio);
>  			}
>  
>  			src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
Matthew Wilcox Feb. 15, 2024, 4:08 a.m. UTC | #2
On Thu, Feb 15, 2024 at 09:38:42AM +1100, Alistair Popple wrote:
> > +++ b/mm/migrate_device.c
> > @@ -377,33 +377,33 @@ static unsigned long migrate_device_unmap(unsigned long *src_pfns,
> >  			continue;
> >  		}
> >  
> > +		folio = page_folio(page);
> 
> Instead of open coding the migrate pfn to folio conversion I think we
> should define a migrate_pfn_to_folio() and get rid of the intermediate
> local variable. This would also allow a minor clean up to the final for
> loop in migrate_device_unmap().

I think we should stop passing pfns into migrate_device_unmap().
Passing an array of folios would make more sense to every function
involved, afaict.  Maybe I overlooked something ...

Also, have you had any thoughts on whether device memory is a type of
folio like anon/file memory, or is it its own type?
Alistair Popple Feb. 16, 2024, 2:21 a.m. UTC | #3
Matthew Wilcox <willy@infradead.org> writes:

> On Thu, Feb 15, 2024 at 09:38:42AM +1100, Alistair Popple wrote:
>> > +++ b/mm/migrate_device.c
>> > @@ -377,33 +377,33 @@ static unsigned long migrate_device_unmap(unsigned long *src_pfns,
>> >  			continue;
>> >  		}
>> >  
>> > +		folio = page_folio(page);
>> 
>> Instead of open coding the migrate pfn to folio conversion I think we
>> should define a migrate_pfn_to_folio() and get rid of the intermediate
>> local variable. This would also allow a minor clean up to the final for
>> loop in migrate_device_unmap().
>
> I think we should stop passing pfns into migrate_device_unmap().
> Passing an array of folios would make more sense to every function
> involved, afaict.  Maybe I overlooked something ...

Note these are migration pfns. The main reason we do this is that we
need to track, and possibly modify, some per-pfn state that is shared
between all these functions during the migration process.
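
For context, each src_pfns[] entry packs the pfn together with per-entry
state bits. Paraphrasing include/linux/migrate.h (the exact bit layout
may differ between kernel versions):

	#define MIGRATE_PFN_VALID	(1UL << 0)
	#define MIGRATE_PFN_MIGRATE	(1UL << 1)
	#define MIGRATE_PFN_WRITE	(1UL << 3)
	#define MIGRATE_PFN_SHIFT	6

	static inline struct page *migrate_pfn_to_page(unsigned long mpfn)
	{
		if (!(mpfn & MIGRATE_PFN_VALID))
			return NULL;
		return pfn_to_page(mpfn >> MIGRATE_PFN_SHIFT);
	}

Clearing MIGRATE_PFN_MIGRATE, as the loop above does when isolation or
unmapping fails, is exactly the kind of per-pfn state update that a
plain folio array couldn't carry back to the caller.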

> Also, have you had any thoughts on whether device memory is a type of
> folio like anon/file memory, or is it its own type?

I don't quite follow what the precise distinction is there, but I think
of them as normal pages/folios like anon/file memory folios, because we
rely on the same kernel paths and rules to manage them (i.e. they get
refcounted the same as normal pages, CoWed, etc.). Currently we only
allow these to be mapped into private/anon VMAs, but I have an
experimental series to allow them to be mapped into shared or
file-backed VMAs, which basically involves putting them into the
page cache.

Most drivers also have a 1:1 mapping of struct page to a physical page
of device memory, and thanks to all the folio work it's fairly easy to
extend this to support higher-order folios. I will try to post the
first half of my changes that convert all the page-based handling to
folios. I got caught up trying to figure out a sane API for
splitting/merging during migration, but maybe I should just post the
folio conversion as a simpler first step.

Patch

diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index b6c27c76e1a0b..9152a329b0a68 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -377,33 +377,33 @@  static unsigned long migrate_device_unmap(unsigned long *src_pfns,
 			continue;
 		}
 
+		folio = page_folio(page);
 		/* ZONE_DEVICE pages are not on LRU */
-		if (!is_zone_device_page(page)) {
-			if (!PageLRU(page) && allow_drain) {
+		if (!folio_is_zone_device(folio)) {
+			if (!folio_test_lru(folio) && allow_drain) {
 				/* Drain CPU's lru cache */
 				lru_add_drain_all();
 				allow_drain = false;
 			}
 
-			if (!isolate_lru_page(page)) {
+			if (!folio_isolate_lru(folio)) {
 				src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
 				restore++;
 				continue;
 			}
 
 			/* Drop the reference we took in collect */
-			put_page(page);
+			folio_put(folio);
 		}
 
-		folio = page_folio(page);
 		if (folio_mapped(folio))
 			try_to_migrate(folio, 0);
 
-		if (page_mapped(page) ||
+		if (folio_mapped(folio) ||
 		    !migrate_vma_check_page(page, fault_page)) {
-			if (!is_zone_device_page(page)) {
-				get_page(page);
-				putback_lru_page(page);
+			if (!folio_is_zone_device(folio)) {
+				folio_get(folio);
+				folio_putback_lru(folio);
 			}
 
 			src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;