| Message ID | 20190212221400.3512-1-mike.kravetz@oracle.com |
|---|---|
| State | New, archived |
| Series | huegtlbfs: fix races and page leaks during migration |
On 2/12/19 2:14 PM, Mike Kravetz wrote:
>
> Hugetlb pages can also be leaked at migration time if the pages are
> associated with a file in an explicitly mounted hugetlbfs filesystem.
> For example, a test program which hole punches, faults and migrates
> pages in such a file (1G in size) will eventually fail because it
> can not allocate a page.  Reported counts and usage at time of failure:
>
> node0
>  537    free_hugepages
> 1024    nr_hugepages
>    0    surplus_hugepages
> node1
> 1000    free_hugepages
> 1024    nr_hugepages
>    0    surplus_hugepages
>
> Filesystem                Size  Used Avail Use% Mounted on
> nodev                     4.0G  4.0G     0 100% /var/opt/hugepool
>
> Note that the filesystem shows 4G of pages used, while actual usage is
> 511 pages (just under 1G).  Failed trying to allocate page 512.

My apologies.  The test scenario described above does not trigger the
page leak issue fixed with this patch.  It actually triggers another
undiagnosed and unfixed issue with huge page migration that I will be
working on.  Sigh!

The leak with migration of huge pages in an explicitly mounted filesystem
is still fixed by this patch.  However, the commit message should be
changed to more accurately reflect testing and observed outcomes.  The
patch with only commit message changes is below:

From: Mike Kravetz <mike.kravetz@oracle.com>
Date: Tue, 12 Feb 2019 10:58:28 -0800
Subject: [PATCH] huegtlbfs: fix races and page leaks during migration

hugetlb pages should only be migrated if they are 'active'.  The routines
set/clear_page_huge_active() modify the active state of hugetlb pages.
When a new hugetlb page is allocated at fault time, set_page_huge_active
is called before the page is locked.  Therefore, another thread could
race and migrate the page while it is being added to page table by the
fault code.  This race is somewhat hard to trigger, but can be seen by
strategically adding udelay to simulate worst case scheduling behavior.
Depending on 'how' the code races, various BUG()s could be triggered.

To address this issue, simply delay the set_page_huge_active call until
after the page is successfully added to the page table.

Hugetlb pages can also be leaked at migration time if the pages are
associated with a file in an explicitly mounted hugetlbfs filesystem.
For example, consider a two node system with 4GB worth of huge pages
available.  A program mmaps a 2G file in a hugetlbfs filesystem.  It
then migrates the pages associated with the file from one node to
another.  When the program exits, huge page counts are as follows:

node0
1024    free_hugepages
1024    nr_hugepages

node1
0       free_hugepages
1024    nr_hugepages

Filesystem                Size  Used Avail Use% Mounted on
nodev                     4.0G  2.0G  2.0G  50% /var/opt/hugepool

That is as expected.  2G of huge pages are taken from the free_hugepages
counts, and 2G is the size of the file in the explicitly mounted
filesystem.  If the file is then removed, the counts become:

node0
1024    free_hugepages
1024    nr_hugepages

node1
1024    free_hugepages
1024    nr_hugepages

Filesystem                Size  Used Avail Use% Mounted on
nodev                     4.0G  2.0G  2.0G  50% /var/opt/hugepool

Note that the filesystem still shows 2G of pages used, while there
actually are no huge pages in use.  The only way to 'fix' the
filesystem accounting is to unmount the filesystem.

If a hugetlb page is associated with an explicitly mounted filesystem,
this information is contained in the page_private field.  At migration
time, this information is not preserved.  To fix, simply transfer
page_private from old to new page at migration time if necessary.
Cc: <stable@vger.kernel.org>
Fixes: bcc54222309c ("mm: hugetlb: introduce page_huge_active")
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 fs/hugetlbfs/inode.c | 12 ++++++++++++
 mm/hugetlb.c         |  9 ++++++---
 2 files changed, 18 insertions(+), 3 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 32920a10100e..a7fa037b876b 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -859,6 +859,18 @@ static int hugetlbfs_migrate_page(struct address_space *mapping,
 	rc = migrate_huge_page_move_mapping(mapping, newpage, page);
 	if (rc != MIGRATEPAGE_SUCCESS)
 		return rc;
+
+	/*
+	 * page_private is subpool pointer in hugetlb pages.  Transfer to
+	 * new page.  PagePrivate is not associated with page_private for
+	 * hugetlb pages and can not be set here as only page_huge_active
+	 * pages can be migrated.
+	 */
+	if (page_private(page)) {
+		set_page_private(newpage, page_private(page));
+		set_page_private(page, 0);
+	}
+
 	if (mode != MIGRATE_SYNC_NO_COPY)
 		migrate_page_copy(newpage, page);
 	else
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a80832487981..f859e319e3eb 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3625,7 +3625,6 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 	copy_user_huge_page(new_page, old_page, address, vma,
 			    pages_per_huge_page(h));
 	__SetPageUptodate(new_page);
-	set_page_huge_active(new_page);
 
 	mmun_start = haddr;
 	mmun_end = mmun_start + huge_page_size(h);
@@ -3647,6 +3646,7 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 				make_huge_pte(vma, new_page, 1));
 		page_remove_rmap(old_page, true);
 		hugepage_add_new_anon_rmap(new_page, vma, haddr);
+		set_page_huge_active(new_page);
 		/* Make the old page be freed below */
 		new_page = old_page;
 	}
@@ -3792,7 +3792,6 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 	}
 	clear_huge_page(page, address, pages_per_huge_page(h));
 	__SetPageUptodate(page);
-	set_page_huge_active(page);
 
 	if (vma->vm_flags & VM_MAYSHARE) {
 		int err = huge_add_to_page_cache(page, mapping, idx);
@@ -3863,6 +3862,10 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 	}
 
 	spin_unlock(ptl);
+
+	/* May already be set if not newly allocated page */
+	set_page_huge_active(page);
+
 	unlock_page(page);
 out:
 	return ret;
@@ -4097,7 +4100,6 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
 	 * the set_pte_at() write.
 	 */
 	__SetPageUptodate(page);
-	set_page_huge_active(page);
 
 	mapping = dst_vma->vm_file->f_mapping;
 	idx = vma_hugecache_offset(h, dst_vma, dst_addr);
@@ -4165,6 +4167,7 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
 	update_mmu_cache(dst_vma, dst_addr, dst_pte);
 
 	spin_unlock(ptl);
+	set_page_huge_active(page);
 	if (vm_shared)
 		unlock_page(page);
 	ret = 0;
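The two node scenario in the revised commit message can be driven from
userspace with move_pages(2).  Below is a minimal sketch, not Mike's actual
test program: it assumes 2MB huge pages, the /var/opt/hugepool mount from
the example, an illustrative file name, libnuma headers installed, and
migration from node 0 to node 1 (build with -lnuma):

```c
/*
 * Hypothetical sketch: mmap a 2G file in an explicitly mounted hugetlbfs
 * filesystem, fault in every huge page, then migrate the pages to node 1.
 * File name, sizes, and node numbers are illustrative assumptions.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <numaif.h>		/* move_pages(2) wrapper from libnuma */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define HPAGE_SIZE	(2UL << 20)	/* assumes 2MB huge pages */
#define FILE_SIZE	(2UL << 30)	/* 2G file, as in the example */

int main(void)
{
	unsigned long npages = FILE_SIZE / HPAGE_SIZE;
	void **pages;
	int *nodes, *status;
	unsigned long i;
	char *addr;
	int fd;

	fd = open("/var/opt/hugepool/testfile", O_CREAT | O_RDWR, 0644);
	if (fd < 0 || ftruncate(fd, FILE_SIZE))
		exit(1);

	addr = mmap(NULL, FILE_SIZE, PROT_READ | PROT_WRITE,
		    MAP_SHARED, fd, 0);
	if (addr == MAP_FAILED)
		exit(1);

	/* Fault in every huge page so there is something to migrate. */
	for (i = 0; i < npages; i++)
		addr[i * HPAGE_SIZE] = 1;

	pages = calloc(npages, sizeof(*pages));
	nodes = calloc(npages, sizeof(*nodes));
	status = calloc(npages, sizeof(*status));
	for (i = 0; i < npages; i++) {
		pages[i] = addr + i * HPAGE_SIZE;
		nodes[i] = 1;		/* move everything to node 1 */
	}

	/* pid 0 means "this process"; huge pages are migrated whole. */
	if (move_pages(0, npages, pages, nodes, status, MPOL_MF_MOVE))
		fprintf(stderr, "move_pages failed or left pages behind\n");

	munmap(addr, FILE_SIZE);
	close(fd);
	return 0;
}
```

Before the fix, running something like this and then removing the file
leaves the filesystem's Used count inflated, matching the counts quoted in
the commit message above.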
Hi,

[This is an automated email]

This commit has been processed because it contains a "Fixes:" tag,
fixing commit: bcc54222309c mm: hugetlb: introduce page_huge_active.

The bot has tested the following trees: v4.20.8, v4.19.21, v4.14.99,
v4.9.156, v4.4.174, v3.18.134.

v4.20.8: Build OK!
v4.19.21: Build OK!
v4.14.99: Failed to apply! Possible dependencies:
    5b7a1d406062 ("mm, hugetlbfs: rename address to haddr in hugetlb_cow()")

v4.9.156: Failed to apply! Possible dependencies:
    2916ecc0f9d4 ("mm/migrate: new migrate mode MIGRATE_SYNC_NO_COPY")
    369cd2121be4 ("userfaultfd: hugetlbfs: userfaultfd_huge_must_wait for hugepmd ranges")
    5b7a1d406062 ("mm, hugetlbfs: rename address to haddr in hugetlb_cow()")
    7868a2087ec1 ("mm/hugetlb: add size parameter to huge_pte_offset()")
    82b0f8c39a38 ("mm: join struct fault_env and vm_fault")
    8fb5debc5fcd ("userfaultfd: hugetlbfs: add hugetlb_mcopy_atomic_pte for userfaultfd support")
    953c66c2b22a ("mm: THP page cache support for ppc64")
    fd60775aea80 ("mm, thp: avoid unlikely branches for split_huge_pmd")

v4.4.174: Failed to apply! Possible dependencies:
    09cbfeaf1a5a ("mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros")
    0e749e54244e ("dax: increase granularity of dax_clear_blocks() operations")
    2916ecc0f9d4 ("mm/migrate: new migrate mode MIGRATE_SYNC_NO_COPY")
    2a28900be206 ("udf: Export superblock magic to userspace")
    4420cfd3f51c ("staging: lustre: format properly all comment blocks for LNet core")
    48b4800a1c6a ("zsmalloc: page migration support")
    5057dcd0f1aa ("virtio_balloon: export 'available' memory to balloon statistics")
    52db400fcd50 ("pmem, dax: clean up clear_pmem()")
    5b7a487cf32d ("f2fs: add customized migrate_page callback")
    5fd88337d209 ("staging: lustre: fix all conditional comparison to zero in LNet layer")
    a188222b6ed2 ("net: Rename NETIF_F_ALL_CSUM to NETIF_F_CSUM_MASK")
    b1123ea6d3b3 ("mm: balloon: use general non-lru movable page feature")
    b2e0d1625e19 ("dax: fix lifetime of in-kernel dax mappings with dax_map_atomic()")
    bda807d44454 ("mm: migrate: support non-lru movable page migration")
    c8b8e32d700f ("direct-io: eliminate the offset argument to ->direct_IO")
    d1a5f2b4d8a1 ("block: use DAX for partition table reads")
    e10624f8c097 ("pmem: fail io-requests to known bad blocks")

v3.18.134: Failed to apply! Possible dependencies:
    0722b1011a5f ("f2fs: set page private for inmemory pages for truncation")
    1601839e9e5b ("f2fs: fix to release count of meta page in ->invalidatepage")
    2916ecc0f9d4 ("mm/migrate: new migrate mode MIGRATE_SYNC_NO_COPY")
    31a3268839c1 ("f2fs: cleanup if-statement of phase in gc_data_segment")
    34ba94bac938 ("f2fs: do not make dirty any inmemory pages")
    34d67debe02b ("f2fs: add infra struct and helper for inline dir")
    4634d71ed190 ("f2fs: fix missing kmem_cache_free")
    487261f39bcd ("f2fs: merge {invalidate,release}page for meta/node/data pages")
    5b7a487cf32d ("f2fs: add customized migrate_page callback")
    67298804f344 ("f2fs: introduce struct inode_management to wrap inner fields")
    769ec6e5b7d4 ("f2fs: call radix_tree_preload before radix_tree_insert")
    7dda2af83b2b ("f2fs: more fast lookup for gc_inode list")
    8b26ef98da33 ("f2fs: use rw_semaphore for nat entry lock")
    8c402946f074 ("f2fs: introduce the number of inode entries")
    9be32d72becc ("f2fs: do retry operations with cond_resched")
    9e4ded3f309e ("f2fs: activate f2fs_trace_pid")
    d5053a34a9cc ("f2fs: introduce -o fastboot for reducing booting time only")
    e5e7ea3c86e5 ("f2fs: control the memory footprint used by ino entries")
    f68daeebba5a ("f2fs: keep PagePrivate during releasepage")

How should we proceed with this patch?

--
Thanks,
Sasha
On Tue, 12 Feb 2019 14:14:00 -0800 Mike Kravetz <mike.kravetz@oracle.com> wrote:

> hugetlb pages should only be migrated if they are 'active'.  The routines
> set/clear_page_huge_active() modify the active state of hugetlb pages.
> When a new hugetlb page is allocated at fault time, set_page_huge_active
> is called before the page is locked.  Therefore, another thread could
> race and migrate the page while it is being added to page table by the
> fault code.  This race is somewhat hard to trigger, but can be seen by
> strategically adding udelay to simulate worst case scheduling behavior.
> Depending on 'how' the code races, various BUG()s could be triggered.
>
> To address this issue, simply delay the set_page_huge_active call until
> after the page is successfully added to the page table.
>
> Hugetlb pages can also be leaked at migration time if the pages are
> associated with a file in an explicitly mounted hugetlbfs filesystem.
> For example, a test program which hole punches, faults and migrates
> pages in such a file (1G in size) will eventually fail because it
> can not allocate a page.  Reported counts and usage at time of failure:
>
> node0
>  537    free_hugepages
> 1024    nr_hugepages
>    0    surplus_hugepages
> node1
> 1000    free_hugepages
> 1024    nr_hugepages
>    0    surplus_hugepages
>
> Filesystem                Size  Used Avail Use% Mounted on
> nodev                     4.0G  4.0G     0 100% /var/opt/hugepool
>
> Note that the filesystem shows 4G of pages used, while actual usage is
> 511 pages (just under 1G).  Failed trying to allocate page 512.
>
> If a hugetlb page is associated with an explicitly mounted filesystem,
> this information is contained in the page_private field.  At migration
> time, this information is not preserved.  To fix, simply transfer
> page_private from old to new page at migration time if necessary.
>
> Cc: <stable@vger.kernel.org>
> Fixes: bcc54222309c ("mm: hugetlb: introduce page_huge_active")
> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>

cc:stable.  It would be nice to get some review of this one, please?

> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -859,6 +859,18 @@ static int hugetlbfs_migrate_page(struct address_space *mapping,
>  	rc = migrate_huge_page_move_mapping(mapping, newpage, page);
>  	if (rc != MIGRATEPAGE_SUCCESS)
>  		return rc;
> +
> +	/*
> +	 * page_private is subpool pointer in hugetlb pages.  Transfer to
> +	 * new page.  PagePrivate is not associated with page_private for
> +	 * hugetlb pages and can not be set here as only page_huge_active
> +	 * pages can be migrated.
> +	 */
> +	if (page_private(page)) {
> +		set_page_private(newpage, page_private(page));
> +		set_page_private(page, 0);
> +	}
> +
>  	if (mode != MIGRATE_SYNC_NO_COPY)
>  		migrate_page_copy(newpage, page);
>  	else
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index a80832487981..f859e319e3eb 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -3625,7 +3625,6 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
>  	copy_user_huge_page(new_page, old_page, address, vma,
>  			    pages_per_huge_page(h));
>  	__SetPageUptodate(new_page);
> -	set_page_huge_active(new_page);
>
>  	mmun_start = haddr;
>  	mmun_end = mmun_start + huge_page_size(h);
> @@ -3647,6 +3646,7 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
>  				make_huge_pte(vma, new_page, 1));
>  		page_remove_rmap(old_page, true);
>  		hugepage_add_new_anon_rmap(new_page, vma, haddr);
> +		set_page_huge_active(new_page);
>  		/* Make the old page be freed below */
>  		new_page = old_page;
>  	}
> @@ -3792,7 +3792,6 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
>  	}
>  	clear_huge_page(page, address, pages_per_huge_page(h));
>  	__SetPageUptodate(page);
> -	set_page_huge_active(page);
>
>  	if (vma->vm_flags & VM_MAYSHARE) {
>  		int err = huge_add_to_page_cache(page, mapping, idx);
> @@ -3863,6 +3862,10 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
>  	}
>
>  	spin_unlock(ptl);
> +
> +	/* May already be set if not newly allocated page */
> +	set_page_huge_active(page);
> +
>  	unlock_page(page);
> out:
>  	return ret;
> @@ -4097,7 +4100,6 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
>  	 * the set_pte_at() write.
>  	 */
>  	__SetPageUptodate(page);
> -	set_page_huge_active(page);
>
>  	mapping = dst_vma->vm_file->f_mapping;
>  	idx = vma_hugecache_offset(h, dst_vma, dst_addr);
> @@ -4165,6 +4167,7 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
>  	update_mmu_cache(dst_vma, dst_addr, dst_pte);
>
>  	spin_unlock(ptl);
> +	set_page_huge_active(page);
>  	if (vm_shared)
>  		unlock_page(page);
>  	ret = 0;
> --
> 2.17.2
On 2/20/19 10:09 PM, Andrew Morton wrote:
> On Tue, 12 Feb 2019 14:14:00 -0800 Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> cc:stable.  It would be nice to get some review of this one, please?
>
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index a80832487981..f859e319e3eb 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -3625,7 +3625,6 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
>>  	copy_user_huge_page(new_page, old_page, address, vma,
>>  			    pages_per_huge_page(h));
>>  	__SetPageUptodate(new_page);
>> -	set_page_huge_active(new_page);
>>
>>  	mmun_start = haddr;
>>  	mmun_end = mmun_start + huge_page_size(h);
>> @@ -3647,6 +3646,7 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
>>  				make_huge_pte(vma, new_page, 1));
>>  		page_remove_rmap(old_page, true);
>>  		hugepage_add_new_anon_rmap(new_page, vma, haddr);
>> +		set_page_huge_active(new_page);
>>  		/* Make the old page be freed below */
>>  		new_page = old_page;
>>  	}
>> @@ -3792,7 +3792,6 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
>>  	}
>>  	clear_huge_page(page, address, pages_per_huge_page(h));
>>  	__SetPageUptodate(page);
>> -	set_page_huge_active(page);
>>
>>  	if (vma->vm_flags & VM_MAYSHARE) {
>>  		int err = huge_add_to_page_cache(page, mapping, idx);
>> @@ -3863,6 +3862,10 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
>>  	}
>>
>>  	spin_unlock(ptl);
>> +
>> +	/* May already be set if not newly allocated page */
>> +	set_page_huge_active(page);
>> +

This is wrong.  We need to only set_page_huge_active() for newly allocated
pages.  Why?  We could have got the page from the pagecache, and it could
be that the page is !page_huge_active() because it has been isolated for
migration.  Therefore, we do not want to set it active here.

I have also found another race with migration when removing a page from
a file.  When a huge page is removed from the pagecache, the page_mapping()
field is cleared yet page_private continues to point to the subpool until
the page is actually freed by free_huge_page().  free_huge_page is what
adjusts the counts for the subpool.  A page could be migrated while in this
state.  However, since page_mapping() is not set the hugetlbfs specific
routine to transfer page_private is not called and we leak the page count
in the filesystem.  To fix, check for this condition before migrating a huge
page.  If the condition is detected, return EBUSY for the page.

Both issues are addressed in the updated patch below.

Sorry for the churn.  As I find and fix one issue I seem to discover another.
There is still at least one more issue with private pages when COW comes into
play.  I continue to work on that.  I wanted to send this patch earlier as it
is pretty easy to hit the bugs if you try.  If you would prefer another
approach, let me know.

From: Mike Kravetz <mike.kravetz@oracle.com>
Date: Thu, 21 Feb 2019 11:01:04 -0800
Subject: [PATCH] huegtlbfs: fix races and page leaks during migration

hugetlb pages should only be migrated if they are 'active'.  The routines
set/clear_page_huge_active() modify the active state of hugetlb pages.
When a new hugetlb page is allocated at fault time, set_page_huge_active
is called before the page is locked.  Therefore, another thread could
race and migrate the page while it is being added to page table by the
fault code.  This race is somewhat hard to trigger, but can be seen by
strategically adding udelay to simulate worst case scheduling behavior.
Depending on 'how' the code races, various BUG()s could be triggered.

To address this issue, simply delay the set_page_huge_active call until
after the page is successfully added to the page table.

Hugetlb pages can also be leaked at migration time if the pages are
associated with a file in an explicitly mounted hugetlbfs filesystem.
For example, consider a two node system with 4GB worth of huge pages
available.  A program mmaps a 2G file in a hugetlbfs filesystem.  It
then migrates the pages associated with the file from one node to
another.  When the program exits, huge page counts are as follows:

node0
1024    free_hugepages
1024    nr_hugepages

node1
0       free_hugepages
1024    nr_hugepages

Filesystem                Size  Used Avail Use% Mounted on
nodev                     4.0G  2.0G  2.0G  50% /var/opt/hugepool

That is as expected.  2G of huge pages are taken from the free_hugepages
counts, and 2G is the size of the file in the explicitly mounted
filesystem.  If the file is then removed, the counts become:

node0
1024    free_hugepages
1024    nr_hugepages

node1
1024    free_hugepages
1024    nr_hugepages

Filesystem                Size  Used Avail Use% Mounted on
nodev                     4.0G  2.0G  2.0G  50% /var/opt/hugepool

Note that the filesystem still shows 2G of pages used, while there
actually are no huge pages in use.  The only way to 'fix' the
filesystem accounting is to unmount the filesystem.

If a hugetlb page is associated with an explicitly mounted filesystem,
this information is contained in the page_private field.  At migration
time, this information is not preserved.  To fix, simply transfer
page_private from old to new page at migration time if necessary.

There is a related race with removing a huge page from a file and
migration.  When a huge page is removed from the pagecache, the
page_mapping() field is cleared yet page_private remains set until the
page is actually freed by free_huge_page().  A page could be migrated
while in this state.  However, since page_mapping() is not set the
hugetlbfs specific routine to transfer page_private is not called and we
leak the page count in the filesystem.  To fix, check for this condition
before migrating a huge page.  If the condition is detected, return EBUSY
for the page.

Cc: <stable@vger.kernel.org>
Fixes: bcc54222309c ("mm: hugetlb: introduce page_huge_active")
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 fs/hugetlbfs/inode.c | 12 ++++++++++++
 mm/hugetlb.c         | 12 +++++++++---
 mm/migrate.c         | 11 +++++++++++
 3 files changed, 32 insertions(+), 3 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 32920a10100e..a7fa037b876b 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -859,6 +859,18 @@ static int hugetlbfs_migrate_page(struct address_space *mapping,
 	rc = migrate_huge_page_move_mapping(mapping, newpage, page);
 	if (rc != MIGRATEPAGE_SUCCESS)
 		return rc;
+
+	/*
+	 * page_private is subpool pointer in hugetlb pages.  Transfer to
+	 * new page.  PagePrivate is not associated with page_private for
+	 * hugetlb pages and can not be set here as only page_huge_active
+	 * pages can be migrated.
+	 */
+	if (page_private(page)) {
+		set_page_private(newpage, page_private(page));
+		set_page_private(page, 0);
+	}
+
 	if (mode != MIGRATE_SYNC_NO_COPY)
 		migrate_page_copy(newpage, page);
 	else
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a80832487981..e9c92e925b7e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3625,7 +3625,6 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 	copy_user_huge_page(new_page, old_page, address, vma,
 			    pages_per_huge_page(h));
 	__SetPageUptodate(new_page);
-	set_page_huge_active(new_page);
 
 	mmun_start = haddr;
 	mmun_end = mmun_start + huge_page_size(h);
@@ -3647,6 +3646,7 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 				make_huge_pte(vma, new_page, 1));
 		page_remove_rmap(old_page, true);
 		hugepage_add_new_anon_rmap(new_page, vma, haddr);
+		set_page_huge_active(new_page);
 		/* Make the old page be freed below */
 		new_page = old_page;
 	}
@@ -3731,6 +3731,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 	pte_t new_pte;
 	spinlock_t *ptl;
 	unsigned long haddr = address & huge_page_mask(h);
+	bool new_page = false;
 
 	/*
 	 * Currently, we are forced to kill the process in the event the
@@ -3792,7 +3793,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 	}
 	clear_huge_page(page, address, pages_per_huge_page(h));
 	__SetPageUptodate(page);
-	set_page_huge_active(page);
+	new_page = true;
 
 	if (vma->vm_flags & VM_MAYSHARE) {
 		int err = huge_add_to_page_cache(page, mapping, idx);
@@ -3863,6 +3864,11 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 	}
 
 	spin_unlock(ptl);
+
+	/* Make newly allocated pages active */
+	if (new_page)
+		set_page_huge_active(page);
+
 	unlock_page(page);
 out:
 	return ret;
@@ -4097,7 +4103,6 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
 	 * the set_pte_at() write.
 	 */
 	__SetPageUptodate(page);
-	set_page_huge_active(page);
 
 	mapping = dst_vma->vm_file->f_mapping;
 	idx = vma_hugecache_offset(h, dst_vma, dst_addr);
@@ -4165,6 +4170,7 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
 	update_mmu_cache(dst_vma, dst_addr, dst_pte);
 
 	spin_unlock(ptl);
+	set_page_huge_active(page);
 	if (vm_shared)
 		unlock_page(page);
 	ret = 0;
diff --git a/mm/migrate.c b/mm/migrate.c
index f7e4bfdc13b7..23d91146052b 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1290,6 +1290,16 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 		lock_page(hpage);
 	}
 
+	/*
+	 * Check for pages which are in the process of being freed.  Without
+	 * page_mapping() set, hugetlbfs specific move page routine will not
+	 * be called and we could leak usage counts for subpools.
+	 */
+	if (page_private(hpage) && !page_mapping(hpage)) {
+		rc = -EBUSY;
+		goto out_unlock;
+	}
+
 	if (PageAnon(hpage))
 		anon_vma = page_get_anon_vma(hpage);
 
@@ -1320,6 +1330,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 		put_new_page = NULL;
 	}
 
+out_unlock:
 	unlock_page(hpage);
 out:
 	if (rc != -EAGAIN)
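The window the new -EBUSY check closes opens between pagecache removal and
the final free_huge_page().  A hypothetical racing reproducer in the spirit
of the hole punch test mentioned in the original posting: one thread punches
out and refaults pages while another migrates them between nodes.  The path,
1G size, 2MB huge page size, and node numbers are assumptions of this sketch,
not Mike's actual test program (build with -lnuma -lpthread):

```c
/*
 * Hypothetical race driver: hole punch + refault vs. move_pages on the
 * same hugetlbfs file.  Loops run until interrupted.
 */
#define _GNU_SOURCE
#include <fcntl.h>		/* open, fallocate, FALLOC_FL_* (glibc) */
#include <numaif.h>
#include <pthread.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define HPAGE_SIZE	(2UL << 20)
#define NR_PAGES	512UL		/* 1G file, as in the original message */

static char *addr;
static int fd;

static void *punch_and_fault(void *arg)
{
	unsigned long i;

	for (;;) {
		for (i = 0; i < NR_PAGES; i++) {
			/* Remove the page from the file, then fault it back in. */
			fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
				  i * HPAGE_SIZE, HPAGE_SIZE);
			addr[i * HPAGE_SIZE] = 1;
		}
	}
	return NULL;
}

static void *migrate(void *arg)
{
	void *pages[NR_PAGES];
	int nodes[NR_PAGES], status[NR_PAGES];
	unsigned long i;
	int node = 0;

	for (;;) {
		node ^= 1;	/* bounce pages between node 0 and node 1 */
		for (i = 0; i < NR_PAGES; i++) {
			pages[i] = addr + i * HPAGE_SIZE;
			nodes[i] = node;
		}
		move_pages(0, NR_PAGES, pages, nodes, status, MPOL_MF_MOVE);
	}
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	fd = open("/var/opt/hugepool/race", O_CREAT | O_RDWR, 0644);
	if (fd < 0 || ftruncate(fd, NR_PAGES * HPAGE_SIZE))
		exit(1);
	addr = mmap(NULL, NR_PAGES * HPAGE_SIZE, PROT_READ | PROT_WRITE,
		    MAP_SHARED, fd, 0);
	if (addr == MAP_FAILED)
		exit(1);

	pthread_create(&t1, NULL, punch_and_fault, NULL);
	pthread_create(&t2, NULL, migrate, NULL);
	pthread_join(t1, NULL);
	return 0;
}
```

If migration wins the race on a punched-out page, the page has no mapping
but still carries the subpool pointer in page_private; with the check above,
such a page is simply skipped with -EBUSY instead of leaking the count.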
On Thu, 21 Feb 2019 11:11:06 -0800 Mike Kravetz <mike.kravetz@oracle.com> wrote:

> Sorry for the churn.  As I find and fix one issue I seem to discover another.
> There is still at least one more issue with private pages when COW comes into
> play.  I continue to work on that.  I wanted to send this patch earlier as it
> is pretty easy to hit the bugs if you try.  If you would prefer another
> approach, let me know.

No probs, the bug doesn't seem to be causing a lot of bother out there and
it's cc:stable; there's time to get this right ;)

Here's the delta I queued:

--- a/mm/hugetlb.c~huegtlbfs-fix-races-and-page-leaks-during-migration-update
+++ a/mm/hugetlb.c
@@ -3729,6 +3729,7 @@ static vm_fault_t hugetlb_no_page(struct
 	pte_t new_pte;
 	spinlock_t *ptl;
 	unsigned long haddr = address & huge_page_mask(h);
+	bool new_page = false;
 
 	/*
 	 * Currently, we are forced to kill the process in the event the
@@ -3790,6 +3791,7 @@ retry:
 	}
 	clear_huge_page(page, address, pages_per_huge_page(h));
 	__SetPageUptodate(page);
+	new_page = true;
 
 	if (vma->vm_flags & VM_MAYSHARE) {
 		int err = huge_add_to_page_cache(page, mapping, idx);
@@ -3861,8 +3863,9 @@ retry:
 
 	spin_unlock(ptl);
 
-	/* May already be set if not newly allocated page */
-	set_page_huge_active(page);
+	/* Make newly allocated pages active */
+	if (new_page)
+		set_page_huge_active(page);
 
 	unlock_page(page);
 out:
--- a/mm/migrate.c~huegtlbfs-fix-races-and-page-leaks-during-migration-update
+++ a/mm/migrate.c
@@ -1315,6 +1315,16 @@ static int unmap_and_move_huge_page(new_
 		lock_page(hpage);
 	}
 
+	/*
+	 * Check for pages which are in the process of being freed.  Without
+	 * page_mapping() set, hugetlbfs specific move page routine will not
+	 * be called and we could leak usage counts for subpools.
+	 */
+	if (page_private(hpage) && !page_mapping(hpage)) {
+		rc = -EBUSY;
+		goto out_unlock;
+	}
+
 	if (PageAnon(hpage))
 		anon_vma = page_get_anon_vma(hpage);
 
@@ -1345,6 +1355,7 @@ put_anon:
 		put_new_page = NULL;
 	}
 
+out_unlock:
 	unlock_page(hpage);
 out:
 	if (rc != -EAGAIN)
Hi Mike,

On Thu, Feb 21, 2019 at 11:11:06AM -0800, Mike Kravetz wrote:
> On 2/20/19 10:09 PM, Andrew Morton wrote:
> > On Tue, 12 Feb 2019 14:14:00 -0800 Mike Kravetz <mike.kravetz@oracle.com> wrote:
> >> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> >> index a80832487981..f859e319e3eb 100644
> >> --- a/mm/hugetlb.c
> >> +++ b/mm/hugetlb.c
...
> >> @@ -3863,6 +3862,10 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
> >>  	}
> >>
> >>  	spin_unlock(ptl);
> >> +
> >> +	/* May already be set if not newly allocated page */
> >> +	set_page_huge_active(page);
> >> +
>
> This is wrong.  We need to only set_page_huge_active() for newly allocated
> pages.  Why?  We could have got the page from the pagecache, and it could
> be that the page is !page_huge_active() because it has been isolated for
> migration.  Therefore, we do not want to set it active here.
>
> I have also found another race with migration when removing a page from
> a file.  When a huge page is removed from the pagecache, the page_mapping()
> field is cleared yet page_private continues to point to the subpool until
> the page is actually freed by free_huge_page().  free_huge_page is what
> adjusts the counts for the subpool.  A page could be migrated while in this
> state.  However, since page_mapping() is not set the hugetlbfs specific
> routine to transfer page_private is not called and we leak the page count
> in the filesystem.  To fix, check for this condition before migrating a huge
> page.  If the condition is detected, return EBUSY for the page.
>
> Both issues are addressed in the updated patch below.
>
> Sorry for the churn.  As I find and fix one issue I seem to discover another.
> There is still at least one more issue with private pages when COW comes into
> play.  I continue to work on that.  I wanted to send this patch earlier as it
> is pretty easy to hit the bugs if you try.  If you would prefer another
> approach, let me know.
>
> From: Mike Kravetz <mike.kravetz@oracle.com>
> Date: Thu, 21 Feb 2019 11:01:04 -0800
> Subject: [PATCH] huegtlbfs: fix races and page leaks during migration

Subject still contains a typo.

>
> hugetlb pages should only be migrated if they are 'active'.  The routines
> set/clear_page_huge_active() modify the active state of hugetlb pages.
> When a new hugetlb page is allocated at fault time, set_page_huge_active
> is called before the page is locked.  Therefore, another thread could
> race and migrate the page while it is being added to page table by the
> fault code.  This race is somewhat hard to trigger, but can be seen by
> strategically adding udelay to simulate worst case scheduling behavior.
> Depending on 'how' the code races, various BUG()s could be triggered.
>
> To address this issue, simply delay the set_page_huge_active call until
> after the page is successfully added to the page table.
>
> Hugetlb pages can also be leaked at migration time if the pages are
> associated with a file in an explicitly mounted hugetlbfs filesystem.
> For example, consider a two node system with 4GB worth of huge pages
> available.  A program mmaps a 2G file in a hugetlbfs filesystem.  It
> then migrates the pages associated with the file from one node to
> another.  When the program exits, huge page counts are as follows:
>
> node0
> 1024    free_hugepages
> 1024    nr_hugepages
>
> node1
> 0       free_hugepages
> 1024    nr_hugepages
>
> Filesystem                Size  Used Avail Use% Mounted on
> nodev                     4.0G  2.0G  2.0G  50% /var/opt/hugepool
>
> That is as expected.  2G of huge pages are taken from the free_hugepages
> counts, and 2G is the size of the file in the explicitly mounted
> filesystem.  If the file is then removed, the counts become:
>
> node0
> 1024    free_hugepages
> 1024    nr_hugepages
>
> node1
> 1024    free_hugepages
> 1024    nr_hugepages
>
> Filesystem                Size  Used Avail Use% Mounted on
> nodev                     4.0G  2.0G  2.0G  50% /var/opt/hugepool
>
> Note that the filesystem still shows 2G of pages used, while there
> actually are no huge pages in use.  The only way to 'fix' the
> filesystem accounting is to unmount the filesystem.
>
> If a hugetlb page is associated with an explicitly mounted filesystem,
> this information is contained in the page_private field.  At migration
> time, this information is not preserved.  To fix, simply transfer
> page_private from old to new page at migration time if necessary.
>
> There is a related race with removing a huge page from a file and
> migration.  When a huge page is removed from the pagecache, the
> page_mapping() field is cleared yet page_private remains set until the
> page is actually freed by free_huge_page().  A page could be migrated
> while in this state.  However, since page_mapping() is not set the
> hugetlbfs specific routine to transfer page_private is not called and we
> leak the page count in the filesystem.  To fix, check for this condition
> before migrating a huge page.  If the condition is detected, return EBUSY
> for the page.
>
> Cc: <stable@vger.kernel.org>
> Fixes: bcc54222309c ("mm: hugetlb: introduce page_huge_active")
> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
> ---
>  fs/hugetlbfs/inode.c | 12 ++++++++++++
>  mm/hugetlb.c         | 12 +++++++++---
>  mm/migrate.c         | 11 +++++++++++
>  3 files changed, 32 insertions(+), 3 deletions(-)
>
> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> index 32920a10100e..a7fa037b876b 100644
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -859,6 +859,18 @@ static int hugetlbfs_migrate_page(struct address_space
> *mapping,
>  	rc = migrate_huge_page_move_mapping(mapping, newpage, page);
>  	if (rc != MIGRATEPAGE_SUCCESS)
>  		return rc;
> +
> +	/*
> +	 * page_private is subpool pointer in hugetlb pages.  Transfer to
> +	 * new page.  PagePrivate is not associated with page_private for
> +	 * hugetlb pages and can not be set here as only page_huge_active
> +	 * pages can be migrated.
> +	 */
> +	if (page_private(page)) {
> +		set_page_private(newpage, page_private(page));
> +		set_page_private(page, 0);
> +	}
> +
>  	if (mode != MIGRATE_SYNC_NO_COPY)
>  		migrate_page_copy(newpage, page);
>  	else
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index a80832487981..e9c92e925b7e 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
...
> @@ -3863,6 +3864,11 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
>  	}
>
>  	spin_unlock(ptl);
> +
> +	/* Make newly allocated pages active */

You already have a perfect explanation about why we need this "if",

> ... We could have got the page from the pagecache, and it could
> be that the page is !page_huge_active() because it has been isolated for
> migration.

so you could improve this comment with it.

Anyway, I agree to what/how you try to fix.

Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>

Thanks,
Naoya Horiguchi
On 2/25/19 11:44 PM, Naoya Horiguchi wrote:
> Hi Mike,
>
> On Thu, Feb 21, 2019 at 11:11:06AM -0800, Mike Kravetz wrote:
...
>> From: Mike Kravetz <mike.kravetz@oracle.com>
>> Date: Thu, 21 Feb 2019 11:01:04 -0800
>> Subject: [PATCH] huegtlbfs: fix races and page leaks during migration
>
> Subject still contains a typo.

Yes

>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
> ...
>> @@ -3863,6 +3864,11 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
>>  	}
>>
>>  	spin_unlock(ptl);
>> +
>> +	/* Make newly allocated pages active */
>
> You already have a perfect explanation about why we need this "if",
>
> > ... We could have got the page from the pagecache, and it could
> > be that the page is !page_huge_active() because it has been isolated for
> > migration.
>
> so you could improve this comment with it.

You are correct, the explanation in the commit message should be in the
comment.

> Anyway, I agree to what/how you try to fix.
>
> Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>

Thank you for reviewing!

Andrew, I am not sure if this helps, but I have updated the patch and
included it below.  Changes are:
- Rebased on v5.0-rc6, so some context is different.
- Fixed subject typo and improved comment as suggested by Naoya.
- Reformatted a couple paragraphs in the commit message that had too
  long lines.

If you prefer something else, let me know.

From: Mike Kravetz <mike.kravetz@oracle.com>
Date: Tue, 26 Feb 2019 14:19:36 -0800
Subject: [PATCH] hugetlbfs: fix races and page leaks during migration

hugetlb pages should only be migrated if they are 'active'.  The routines
set/clear_page_huge_active() modify the active state of hugetlb pages.
When a new hugetlb page is allocated at fault time, set_page_huge_active
is called before the page is locked.  Therefore, another thread could
race and migrate the page while it is being added to page table by the
fault code.  This race is somewhat hard to trigger, but can be seen by
strategically adding udelay to simulate worst case scheduling behavior.
Depending on 'how' the code races, various BUG()s could be triggered.

To address this issue, simply delay the set_page_huge_active call until
after the page is successfully added to the page table.

Hugetlb pages can also be leaked at migration time if the pages are
associated with a file in an explicitly mounted hugetlbfs filesystem.
For example, consider a two node system with 4GB worth of huge pages
available.  A program mmaps a 2G file in a hugetlbfs filesystem.  It
then migrates the pages associated with the file from one node to
another.  When the program exits, huge page counts are as follows:

node0
1024    free_hugepages
1024    nr_hugepages

node1
0       free_hugepages
1024    nr_hugepages

Filesystem                Size  Used Avail Use% Mounted on
nodev                     4.0G  2.0G  2.0G  50% /var/opt/hugepool

That is as expected.  2G of huge pages are taken from the free_hugepages
counts, and 2G is the size of the file in the explicitly mounted
filesystem.  If the file is then removed, the counts become:

node0
1024    free_hugepages
1024    nr_hugepages

node1
1024    free_hugepages
1024    nr_hugepages

Filesystem                Size  Used Avail Use% Mounted on
nodev                     4.0G  2.0G  2.0G  50% /var/opt/hugepool

Note that the filesystem still shows 2G of pages used, while there
actually are no huge pages in use.  The only way to 'fix' the
filesystem accounting is to unmount the filesystem.

If a hugetlb page is associated with an explicitly mounted filesystem,
this information is contained in the page_private field.  At migration
time, this information is not preserved.  To fix, simply transfer
page_private from old to new page at migration time if necessary.

There is a related race with removing a huge page from a file and
migration.  When a huge page is removed from the pagecache, the
page_mapping() field is cleared, yet page_private remains set until the
page is actually freed by free_huge_page().  A page could be migrated
while in this state.  However, since page_mapping() is not set the
hugetlbfs specific routine to transfer page_private is not called and we
leak the page count in the filesystem.  To fix, check for this condition
before migrating a huge page.  If the condition is detected, return EBUSY
for the page.

Cc: <stable@vger.kernel.org>
Fixes: bcc54222309c ("mm: hugetlb: introduce page_huge_active")
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
---
 fs/hugetlbfs/inode.c | 12 ++++++++++++
 mm/hugetlb.c         | 16 +++++++++++++---
 mm/migrate.c         | 11 +++++++++++
 3 files changed, 36 insertions(+), 3 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 32920a10100e..a7fa037b876b 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -859,6 +859,18 @@ static int hugetlbfs_migrate_page(struct address_space *mapping,
 	rc = migrate_huge_page_move_mapping(mapping, newpage, page);
 	if (rc != MIGRATEPAGE_SUCCESS)
 		return rc;
+
+	/*
+	 * page_private is subpool pointer in hugetlb pages.  Transfer to
+	 * new page.  PagePrivate is not associated with page_private for
+	 * hugetlb pages and can not be set here as only page_huge_active
+	 * pages can be migrated.
+	 */
+	if (page_private(page)) {
+		set_page_private(newpage, page_private(page));
+		set_page_private(page, 0);
+	}
+
 	if (mode != MIGRATE_SYNC_NO_COPY)
 		migrate_page_copy(newpage, page);
 	else
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index afef61656c1e..8dfdffc34a99 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3624,7 +3624,6 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 	copy_user_huge_page(new_page, old_page, address, vma,
 			    pages_per_huge_page(h));
 	__SetPageUptodate(new_page);
-	set_page_huge_active(new_page);
 
 	mmu_notifier_range_init(&range, mm, haddr, haddr + huge_page_size(h));
 	mmu_notifier_invalidate_range_start(&range);
@@ -3645,6 +3644,7 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 				make_huge_pte(vma, new_page, 1));
 		page_remove_rmap(old_page, true);
 		hugepage_add_new_anon_rmap(new_page, vma, haddr);
+		set_page_huge_active(new_page);
 		/* Make the old page be freed below */
 		new_page = old_page;
 	}
@@ -3729,6 +3729,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 	pte_t new_pte;
 	spinlock_t *ptl;
 	unsigned long haddr = address & huge_page_mask(h);
+	bool new_page = false;
 
 	/*
 	 * Currently, we are forced to kill the process in the event the
@@ -3790,7 +3791,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 	}
 	clear_huge_page(page, address, pages_per_huge_page(h));
 	__SetPageUptodate(page);
-	set_page_huge_active(page);
+	new_page = true;
 
 	if (vma->vm_flags & VM_MAYSHARE) {
 		int err = huge_add_to_page_cache(page, mapping, idx);
@@ -3861,6 +3862,15 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 	}
 
 	spin_unlock(ptl);
+
+	/*
+	 * Only make newly allocated pages active.  Existing pages found
+	 * in the pagecache could be !page_huge_active() if they have been
+	 * isolated for migration.
+	 */
+	if (new_page)
+		set_page_huge_active(page);
+
 	unlock_page(page);
 out:
 	return ret;
@@ -4095,7 +4105,6 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
 	 * the set_pte_at() write.
 	 */
 	__SetPageUptodate(page);
-	set_page_huge_active(page);
 
 	mapping = dst_vma->vm_file->f_mapping;
 	idx = vma_hugecache_offset(h, dst_vma, dst_addr);
@@ -4163,6 +4172,7 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
 	update_mmu_cache(dst_vma, dst_addr, dst_pte);
 
 	spin_unlock(ptl);
+	set_page_huge_active(page);
 	if (vm_shared)
 		unlock_page(page);
 	ret = 0;
diff --git a/mm/migrate.c b/mm/migrate.c
index d4fd680be3b0..181f5d2718a9 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1315,6 +1315,16 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 		lock_page(hpage);
 	}
 
+	/*
+	 * Check for pages which are in the process of being freed.  Without
+	 * page_mapping() set, hugetlbfs specific move page routine will not
+	 * be called and we could leak usage counts for subpools.
+	 */
+	if (page_private(hpage) && !page_mapping(hpage)) {
+		rc = -EBUSY;
+		goto out_unlock;
+	}
+
 	if (PageAnon(hpage))
 		anon_vma = page_get_anon_vma(hpage);
 
@@ -1345,6 +1355,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 		put_new_page = NULL;
 	}
 
+out_unlock:
 	unlock_page(hpage);
 out:
 	if (rc != -EAGAIN)
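The per-node counters quoted throughout the thread can be read back around
a test run to verify the accounting.  A small helper, assuming 2MB huge
pages, two nodes, and the standard sysfs layout (the same values are
visible with a shell loop over
/sys/devices/system/node/node*/hugepages/hugepages-2048kB/):

```c
/* Dump free_hugepages and nr_hugepages for node0 and node1, in the
 * format used in the commit message above.  Node count and huge page
 * size are assumptions of this sketch. */
#include <stdio.h>

int main(void)
{
	const char *files[] = { "free_hugepages", "nr_hugepages" };
	char path[128];
	long val;
	FILE *f;

	for (int node = 0; node <= 1; node++) {
		printf("node%d\n", node);
		for (int i = 0; i < 2; i++) {
			snprintf(path, sizeof(path),
				 "/sys/devices/system/node/node%d/hugepages/hugepages-2048kB/%s",
				 node, files[i]);
			f = fopen(path, "r");
			if (!f)
				continue;
			if (fscanf(f, "%ld", &val) == 1)
				printf("%8ld  %s\n", val, files[i]);
			fclose(f);
		}
	}
	return 0;
}
```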
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 32920a10100e..a7fa037b876b 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -859,6 +859,18 @@ static int hugetlbfs_migrate_page(struct address_space *mapping,
 	rc = migrate_huge_page_move_mapping(mapping, newpage, page);
 	if (rc != MIGRATEPAGE_SUCCESS)
 		return rc;
+
+	/*
+	 * page_private is subpool pointer in hugetlb pages.  Transfer to
+	 * new page.  PagePrivate is not associated with page_private for
+	 * hugetlb pages and can not be set here as only page_huge_active
+	 * pages can be migrated.
+	 */
+	if (page_private(page)) {
+		set_page_private(newpage, page_private(page));
+		set_page_private(page, 0);
+	}
+
 	if (mode != MIGRATE_SYNC_NO_COPY)
 		migrate_page_copy(newpage, page);
 	else
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a80832487981..f859e319e3eb 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3625,7 +3625,6 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 	copy_user_huge_page(new_page, old_page, address, vma,
 			    pages_per_huge_page(h));
 	__SetPageUptodate(new_page);
-	set_page_huge_active(new_page);
 
 	mmun_start = haddr;
 	mmun_end = mmun_start + huge_page_size(h);
@@ -3647,6 +3646,7 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 				make_huge_pte(vma, new_page, 1));
 		page_remove_rmap(old_page, true);
 		hugepage_add_new_anon_rmap(new_page, vma, haddr);
+		set_page_huge_active(new_page);
 		/* Make the old page be freed below */
 		new_page = old_page;
 	}
@@ -3792,7 +3792,6 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 	}
 	clear_huge_page(page, address, pages_per_huge_page(h));
 	__SetPageUptodate(page);
-	set_page_huge_active(page);
 
 	if (vma->vm_flags & VM_MAYSHARE) {
 		int err = huge_add_to_page_cache(page, mapping, idx);
@@ -3863,6 +3862,10 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 	}
 
 	spin_unlock(ptl);
+
+	/* May already be set if not newly allocated page */
+	set_page_huge_active(page);
+
 	unlock_page(page);
 out:
 	return ret;
@@ -4097,7 +4100,6 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
 	 * the set_pte_at() write.
 	 */
 	__SetPageUptodate(page);
-	set_page_huge_active(page);
 
 	mapping = dst_vma->vm_file->f_mapping;
 	idx = vma_hugecache_offset(h, dst_vma, dst_addr);
@@ -4165,6 +4167,7 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
 	update_mmu_cache(dst_vma, dst_addr, dst_pte);
 
 	spin_unlock(ptl);
+	set_page_huge_active(page);
 	if (vm_shared)
 		unlock_page(page);
 	ret = 0;
hugetlb pages should only be migrated if they are 'active'.  The routines
set/clear_page_huge_active() modify the active state of hugetlb pages.
When a new hugetlb page is allocated at fault time, set_page_huge_active
is called before the page is locked.  Therefore, another thread could
race and migrate the page while it is being added to page table by the
fault code.  This race is somewhat hard to trigger, but can be seen by
strategically adding udelay to simulate worst case scheduling behavior.
Depending on 'how' the code races, various BUG()s could be triggered.

To address this issue, simply delay the set_page_huge_active call until
after the page is successfully added to the page table.

Hugetlb pages can also be leaked at migration time if the pages are
associated with a file in an explicitly mounted hugetlbfs filesystem.
For example, a test program which hole punches, faults and migrates
pages in such a file (1G in size) will eventually fail because it
can not allocate a page.  Reported counts and usage at time of failure:

node0
 537    free_hugepages
1024    nr_hugepages
   0    surplus_hugepages
node1
1000    free_hugepages
1024    nr_hugepages
   0    surplus_hugepages

Filesystem                Size  Used Avail Use% Mounted on
nodev                     4.0G  4.0G     0 100% /var/opt/hugepool

Note that the filesystem shows 4G of pages used, while actual usage is
511 pages (just under 1G).  Failed trying to allocate page 512.

If a hugetlb page is associated with an explicitly mounted filesystem,
this information is contained in the page_private field.  At migration
time, this information is not preserved.  To fix, simply transfer
page_private from old to new page at migration time if necessary.

Cc: <stable@vger.kernel.org>
Fixes: bcc54222309c ("mm: hugetlb: introduce page_huge_active")
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 fs/hugetlbfs/inode.c | 12 ++++++++++++
 mm/hugetlb.c         |  9 ++++++---
 2 files changed, 18 insertions(+), 3 deletions(-)