Message ID | 20210320093701.12829-6-linmiaohe@huawei.com (mailing list archive) |
---|---|
State | New, archived |
Series | Cleanup and fixup for mm/migrate.c |
```diff
diff --git a/mm/migrate.c b/mm/migrate.c
index 3e169b72d7b2..67a941c52b6d 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2192,9 +2192,6 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
 	int page_lru = page_is_file_lru(page);
 	unsigned long start = address & HPAGE_PMD_MASK;
 
-	if (is_shared_exec_page(vma, page))
-		goto out;
-
 	new_page = alloc_pages_node(node,
 		(GFP_TRANSHUGE_LIGHT | __GFP_THISNODE),
 		HPAGE_PMD_ORDER);
@@ -2306,7 +2303,6 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
 
 out_unlock:
 	unlock_page(page);
-out:
 	put_page(page);
 	return 0;
 }
```
Since commit c77c5cbafe54 ("mm: migrate: skip shared exec THP for NUMA balancing"), NUMA balancing skips shared exec transhuge pages. But this check is not suitable for the transhuge page path: page_mapcount() is already required to be 1 here, because no migration pte dance is done for THP. Worse, a shared exec transhuge page leaves migrate_misplaced_transhuge_page() with the page table entry untouched and the page still locked. The NUMA pagefault is therefore triggered again, and a deadlock occurs when we start waiting for the page lock we hold ourselves.

Fixes: c77c5cbafe54 ("mm: migrate: skip shared exec THP for NUMA balancing")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>

---
 mm/migrate.c | 4 ----
 1 file changed, 4 deletions(-)
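For illustration only, not the kernel code: the deadlock described above reduces to "re-take a lock you already hold, because an early-exit path forgot to release it". The userspace C sketch below models that pattern; all names (`fault_path`, `migrate_skip`, `page_lock`) are hypothetical stand-ins, and an error-checking pthread mutex is used so the self-deadlock is reported as EDEADLK instead of hanging.

```c
/* Minimal userspace analogue of the deadlock described above.
 * The "fault handler" takes page_lock, the "migration" helper bails
 * out early without unlocking (like the buggy "goto out"), and the
 * retried fault then blocks on the lock it already holds. All names
 * here are hypothetical; this is not the kernel code path itself.
 */
#include <errno.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t page_lock;	/* stands in for the THP's page lock */

/* Analogue of the removed is_shared_exec_page() early exit: report
 * "not migrated" without releasing page_lock, leaking the lock. */
static int migrate_skip(void)
{
	return 0;
}

static void fault_path(void)
{
	/* do_huge_pmd_numa_page() takes the page lock before migrating. */
	int err = pthread_mutex_lock(&page_lock);

	if (err == EDEADLK) {
		printf("self-deadlock detected: lock already held\n");
		return;
	}
	if (!migrate_skip()) {
		/* Buggy path: return with page_lock still held and the
		 * page table entry untouched, so the fault fires again. */
		return;
	}
	pthread_mutex_unlock(&page_lock);
}

int main(void)
{
	pthread_mutexattr_t attr;

	pthread_mutexattr_init(&attr);
	pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
	pthread_mutex_init(&page_lock, &attr);

	fault_path();	/* first NUMA fault: bails out, leaks the lock */
	fault_path();	/* retried fault: would deadlock; EDEADLK here  */
	return 0;
}
```

Compiled with `cc -pthread`, the second fault_path() call reports the self-deadlock instead of hanging, which mirrors why removing the early exit (so the function always reaches the unlock) resolves the hang.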