
mm, migrate: Check return value of try_to_unmap()

Message ID 20200303033645.280694-1-ying.huang@intel.com (mailing list archive)
State New, archived

Commit Message

Huang, Ying March 3, 2020, 3:36 a.m. UTC
From: Huang Ying <ying.huang@intel.com>

During page migration, try_to_unmap() is called to replace the page
table entries with migration entries.  Its return value is currently
ignored, which is generally OK in most cases.  But in theory, it is
possible for try_to_unmap() to fail (return false) during page
migration, after arch_unmap_one() has been called in the unmap code.
Even without arch_unmap_one(), checking the return value makes the
code more robust against future changes.

Known issue: I don't know what the appropriate error code for a
try_to_unmap() failure is.  Is EIO OK?

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Hugh Dickins <hughd@google.com>
---
 mm/migrate.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

Comments

Hugh Dickins March 3, 2020, 6:11 a.m. UTC | #1
On Tue, 3 Mar 2020, Huang, Ying wrote:
> From: Huang Ying <ying.huang@intel.com>
> 
> During page migration, try_to_unmap() is called to replace the page
> table entries with migration entries.  Its return value is currently
> ignored, which is generally OK in most cases.  But in theory, it is
> possible for try_to_unmap() to fail (return false) during page
> migration, after arch_unmap_one() has been called in the unmap code.
> Even without arch_unmap_one(), checking the return value makes the
> code more robust against future changes.

No. This patch serves no purpose, and introduces a bug.

More robust? Robustness is provided by the later expected_count
checks, with freezing where necessary.

Saving time by not going further if try_to_unmap() failed?
That's done by the page_mapped() checks immediately following
the try_to_unmap() calls (just out of sight in two cases).

> 
> Known issue: I don't know what the appropriate error code for a
> try_to_unmap() failure is.  Is EIO OK?

-EAGAIN is the rc already being used for this case,
and which indicates that retries may be appropriate
(but they're often scary overkill).

> 
> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Mel Gorman <mgorman@suse.de>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Minchan Kim <minchan@kernel.org>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Hugh Dickins <hughd@google.com>
> ---
>  mm/migrate.c | 18 ++++++++++++------
>  1 file changed, 12 insertions(+), 6 deletions(-)
> 
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 3900044cfaa6..981f8374a6ef 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1116,8 +1116,11 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
>  		/* Establish migration ptes */
>  		VM_BUG_ON_PAGE(PageAnon(page) && !PageKsm(page) && !anon_vma,
>  				page);
> -		try_to_unmap(page,
> -			TTU_MIGRATION|TTU_IGNORE_MLOCK|TTU_IGNORE_ACCESS);
> +		if (!try_to_unmap(page,
> +			TTU_MIGRATION|TTU_IGNORE_MLOCK|TTU_IGNORE_ACCESS)) {
> +			rc = -EIO;
> +			goto out_unlock_both;

No: even if try_to_unmap() says that it did not entirely succeed,
it may have unmapped some ptes, inserting migration entries in their
place. Those need to be replaced by proper ptes before the page is
unlocked, which setting page_was_mapped = 1 and then calling
remove_migration_ptes() take care of; but this goto skips both.
 
> +		}
>  		page_was_mapped = 1;
>  	}
>  
> @@ -1337,8 +1340,11 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
>  		goto put_anon;
>  
>  	if (page_mapped(hpage)) {
> -		try_to_unmap(hpage,
> -			TTU_MIGRATION|TTU_IGNORE_MLOCK|TTU_IGNORE_ACCESS);
> +		if (!try_to_unmap(hpage,
> +			TTU_MIGRATION|TTU_IGNORE_MLOCK|TTU_IGNORE_ACCESS)) {
> +			rc = -EIO;
> +			goto unlock_both;

Same as above.

> +		}
>  		page_was_mapped = 1;
>  	}
>  
> @@ -1349,6 +1355,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
>  		remove_migration_ptes(hpage,
>  			rc == MIGRATEPAGE_SUCCESS ? new_hpage : hpage, false);
>  
> +unlock_both:
>  	unlock_page(new_hpage);
>  
>  put_anon:
> @@ -2558,8 +2565,7 @@ static void migrate_vma_unmap(struct migrate_vma *migrate)
>  			continue;
>  
>  		if (page_mapped(page)) {
> -			try_to_unmap(page, flags);
> -			if (page_mapped(page))
> +			if (!try_to_unmap(page, flags))
>  				goto restore;
>  		}
>  
> -- 
> 2.25.0
Huang, Ying March 3, 2020, 6:19 a.m. UTC | #2
Hi, Hugh,

Hugh Dickins <hughd@google.com> writes:
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index 3900044cfaa6..981f8374a6ef 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -1116,8 +1116,11 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
>>  		/* Establish migration ptes */
>>  		VM_BUG_ON_PAGE(PageAnon(page) && !PageKsm(page) && !anon_vma,
>>  				page);
>> -		try_to_unmap(page,
>> -			TTU_MIGRATION|TTU_IGNORE_MLOCK|TTU_IGNORE_ACCESS);
>> +		if (!try_to_unmap(page,
>> +			TTU_MIGRATION|TTU_IGNORE_MLOCK|TTU_IGNORE_ACCESS)) {
>> +			rc = -EIO;
>> +			goto out_unlock_both;
>
> No: even if try_to_unmap() says that it did not entirely succeed,
> it may have unmapped some ptes, inserting migration entries in their
> place. Those need to be replaced by proper ptes before the page is
> unlocked, which setting page_was_mapped = 1 and then calling
> remove_migration_ptes() take care of; but this goto skips both.

Yes.  You are right.  I misunderstood the original code.  Please ignore
this patch.

Best Regards,
Huang, Ying

Patch

diff --git a/mm/migrate.c b/mm/migrate.c
index 3900044cfaa6..981f8374a6ef 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1116,8 +1116,11 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 		/* Establish migration ptes */
 		VM_BUG_ON_PAGE(PageAnon(page) && !PageKsm(page) && !anon_vma,
 				page);
-		try_to_unmap(page,
-			TTU_MIGRATION|TTU_IGNORE_MLOCK|TTU_IGNORE_ACCESS);
+		if (!try_to_unmap(page,
+			TTU_MIGRATION|TTU_IGNORE_MLOCK|TTU_IGNORE_ACCESS)) {
+			rc = -EIO;
+			goto out_unlock_both;
+		}
 		page_was_mapped = 1;
 	}
 
@@ -1337,8 +1340,11 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 		goto put_anon;
 
 	if (page_mapped(hpage)) {
-		try_to_unmap(hpage,
-			TTU_MIGRATION|TTU_IGNORE_MLOCK|TTU_IGNORE_ACCESS);
+		if (!try_to_unmap(hpage,
+			TTU_MIGRATION|TTU_IGNORE_MLOCK|TTU_IGNORE_ACCESS)) {
+			rc = -EIO;
+			goto unlock_both;
+		}
 		page_was_mapped = 1;
 	}
 
@@ -1349,6 +1355,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 		remove_migration_ptes(hpage,
 			rc == MIGRATEPAGE_SUCCESS ? new_hpage : hpage, false);
 
+unlock_both:
 	unlock_page(new_hpage);
 
 put_anon:
@@ -2558,8 +2565,7 @@ static void migrate_vma_unmap(struct migrate_vma *migrate)
 			continue;
 
 		if (page_mapped(page)) {
-			try_to_unmap(page, flags);
-			if (page_mapped(page))
+			if (!try_to_unmap(page, flags))
 				goto restore;
 		}