Message ID | 20211018045247.3128058-1-apopple@nvidia.com (mailing list archive)
---|---
State | New
Series | mm/rmap.c: Avoid double faults migrating device private pages
On 10/17/21 21:52, Alistair Popple wrote:
> During migration special page table entries are installed for each page
> being migrated. These entries store the pfn and associated permissions
> of ptes mapping the page being migarted.

s/migarted/migrated/

> Device-private pages use special swap pte entries to distinguish
> read-only vs. writeable pages which the migration code checks when
> creating migration entries. Normally this follows a fast path in
> migrate_vma_collect_pmd() which correctly copies the permissions of
> device-private pages over to migration entries when migrating pages back
> to the CPU.
>
> However the slow-path falls back to using try_to_migrate() which
> unconditionally creates read-only migration entries for device-private
> pages. This leads to unnecessary double faults on the CPU as the new
> pages are always mapped read-only even when they could be mapped
> writeable. Fix this by correctly copying device-private permissions in
> try_to_migrate_one().
>
> Signed-off-by: Alistair Popple <apopple@nvidia.com>
> Reported-by: Ralph Campbell <rcampbell@nvidia.com>
> ---
>  mm/rmap.c | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)

Looks very clearly correct to me.

Reviewed-by: John Hubbard <jhubbard@nvidia.com>

thanks,
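For readers unfamiliar with the encoding the description relies on: device-private pages are mapped with non-present, swap-style ptes whose swap type records whether the mapping was writeable. A simplified sketch of the relevant helpers, approximating include/linux/swapops.h from around the v5.15 time frame (not verbatim kernel source):

/*
 * Sketch: device-private swap entries carry the write permission in
 * their swap type, so writability can be recovered from the pte alone.
 */
static inline bool is_device_private_entry(swp_entry_t entry)
{
        int type = swp_type(entry);

        return type == SWP_DEVICE_READ || type == SWP_DEVICE_WRITE;
}

static inline bool is_writable_device_private_entry(swp_entry_t entry)
{
        return swp_type(entry) == SWP_DEVICE_WRITE;
}

Because try_to_migrate_one() already holds the original pteval, the write permission can be recovered with pte_to_swp_entry() plus the check above, which is what the hunk below does.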
diff --git a/mm/rmap.c b/mm/rmap.c
index b9eb5c12f3fe..271de8118cdd 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1804,6 +1804,7 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
 		update_hiwater_rss(mm);
 
 		if (is_zone_device_page(page)) {
+			unsigned long pfn = page_to_pfn(page);
 			swp_entry_t entry;
 			pte_t swp_pte;
 
@@ -1812,8 +1813,11 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
 			 * pte. do_swap_page() will wait until the migration
 			 * pte is removed and then restart fault handling.
 			 */
-			entry = make_readable_migration_entry(
-							page_to_pfn(page));
+			entry = pte_to_swp_entry(pteval);
+			if (is_writable_device_private_entry(entry))
+				entry = make_writable_migration_entry(pfn);
+			else
+				entry = make_readable_migration_entry(pfn);
 			swp_pte = swp_entry_to_pte(entry);
 
 			/*
During migration special page table entries are installed for each page
being migrated. These entries store the pfn and associated permissions
of ptes mapping the page being migrated.

Device-private pages use special swap pte entries to distinguish
read-only vs. writeable pages which the migration code checks when
creating migration entries. Normally this follows a fast path in
migrate_vma_collect_pmd() which correctly copies the permissions of
device-private pages over to migration entries when migrating pages back
to the CPU.

However the slow-path falls back to using try_to_migrate() which
unconditionally creates read-only migration entries for device-private
pages. This leads to unnecessary double faults on the CPU as the new
pages are always mapped read-only even when they could be mapped
writeable. Fix this by correctly copying device-private permissions in
try_to_migrate_one().

Signed-off-by: Alistair Popple <apopple@nvidia.com>
Reported-by: Ralph Campbell <rcampbell@nvidia.com>
---
 mm/rmap.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)
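For comparison, the fast path named in the description already carries the permission across when collecting pages to migrate. A heavily abridged sketch of that path, approximating migrate_vma_collect_pmd() in mm/migrate.c from around the v5.15 time frame (not verbatim kernel source):

	/*
	 * Abridged sketch of the device-private fast path in
	 * migrate_vma_collect_pmd(): the MIGRATE_PFN_WRITE bit is set
	 * only when the original device-private entry was writeable.
	 */
	if (is_device_private_entry(entry)) {
		page = pfn_swap_entry_to_page(entry);
		mpfn = migrate_pfn(page_to_pfn(page)) | MIGRATE_PFN_MIGRATE;
		if (is_writable_device_private_entry(entry))
			mpfn |= MIGRATE_PFN_WRITE;
	}

The slow path through try_to_migrate_one() previously discarded that information by always creating readable migration entries, so the destination pte was installed read-only and the first CPU write took a second fault purely to upgrade the permission; the patch makes the slow path preserve writability the same way the fast path does.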