[v2,9/9] hugetlb: clean up code checking for fault/truncation races

Message ID 20220914221810.95771-10-mike.kravetz@oracle.com (mailing list archive)
State New
Series hugetlb: Use new vma lock for huge pmd sharing synchronization

Commit Message

Mike Kravetz Sept. 14, 2022, 10:18 p.m. UTC
With the new hugetlb vma lock in place, it can also be used to handle
page fault races with file truncation.  The lock is taken at the
beginning of the fault code path in read mode.  During truncation, it
is taken in write mode for each vma which has the file mapped.  The
file's size (i_size) is modified before taking the vma lock to unmap.

How are races handled?

The page fault code checks i_size early in processing, after taking the
vma lock.  If the fault is beyond i_size, the fault is aborted.  If the
fault is not beyond i_size, the fault will continue and a new page will
be added to the file.  It could be that the truncation code modifies i_size
after the check in the fault code.  That is OK, as the truncation code will
soon remove the page.  The truncation code will wait until the fault is
finished, as it must obtain the vma lock in write mode.
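
A condensed, illustrative sketch of the fault side ordering described
above (not the actual kernel code; the hugetlb_vma_lock_read/unlock
helpers are from earlier patches in this series, everything else is
simplified):

	/* page fault path (hugetlb_fault -> hugetlb_no_page), simplified */
	hugetlb_vma_lock_read(vma);

	size = i_size_read(mapping->host) >> huge_page_shift(h);
	if (idx >= size)
		goto out;	/* fault beyond i_size: abort */

	/*
	 * Allocate the page, add it to the page cache and set up the pte.
	 * If truncation shrinks i_size after the check above, that is
	 * fine: truncation cannot proceed against this vma until the
	 * lock is dropped below, and it will then remove the page.
	 */

	hugetlb_vma_unlock_read(vma);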

This patch cleans up/removes late checks in the fault paths that try to
back out pages racing with truncation.  As noted above, we just let the
truncation code remove the pages.
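
The truncation side, equally condensed and only as an illustrative
sketch of the ordering (roughly hugetlb_vmtruncate() followed by
remove_inode_hugepages(), not the actual code):

	i_size_write(inode, offset);		/* shrink i_size first */

	/* for each vma mapping the file ... */
	hugetlb_vma_lock_write(vma);		/* waits for faults in flight */
	/* ... unmap the range beyond the new i_size ... */
	hugetlb_vma_unlock_write(vma);

	/*
	 * Remove folios beyond the new i_size from the page cache,
	 * including any added by faults that raced with the i_size
	 * update above.
	 */
	remove_inode_hugepages(inode, offset, LLONG_MAX);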

Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 fs/hugetlbfs/inode.c | 31 ++++++++++++-------------------
 mm/hugetlb.c         | 27 ++++++---------------------
 2 files changed, 18 insertions(+), 40 deletions(-)

Comments

Mike Kravetz Sept. 19, 2022, 11:32 p.m. UTC | #1
On 09/14/22 15:18, Mike Kravetz wrote:
> With the new hugetlb vma lock in place, it can also be used to handle
> page fault races with file truncation.  The lock is taken at the
> beginning of the fault code path in read mode.  During truncation, it
> is taken in write mode for each vma which has the file mapped.  The
> file's size (i_size) is modified before taking the vma lock to unmap.
> 
> How are races handled?
> 
> The page fault code checks i_size early in processing after taking the
> vma lock.  If the fault is beyond i_size, the fault is aborted.  If the
> fault is not beyond i_size the fault will continue and a new page will
> be added to the file.  It could be that truncation code modifies i_size
> after the check in fault code.  That is OK, as truncation code will soon
> remove the page.  The truncation code will wait until the fault is
> finished, as it must obtain the vma lock in write mode.
> 
> This patch cleans up/removes late checks in the fault paths that try to
> back out pages racing with truncation.  As noted above, we just let the
> truncation code remove the pages.
> 
> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
> ---
>  fs/hugetlbfs/inode.c | 31 ++++++++++++-------------------
>  mm/hugetlb.c         | 27 ++++++---------------------
>  2 files changed, 18 insertions(+), 40 deletions(-)

This patch introduced a compiler warning addressed here,

https://lore.kernel.org/linux-mm/Yyj7HsJWfHDoU24U@monkey/
Miaohe Lin Sept. 29, 2022, 6:25 a.m. UTC | #2
On 2022/9/15 6:18, Mike Kravetz wrote:
> With the new hugetlb vma lock in place, it can also be used to handle
> page fault races with file truncation.  The lock is taken at the
> beginning of the fault code path in read mode.  During truncation, it
> is taken in write mode for each vma which has the file mapped.  The
> file's size (i_size) is modified before taking the vma lock to unmap.
> 
> How are races handled?
> 
> The page fault code checks i_size early in processing after taking the
> vma lock.  If the fault is beyond i_size, the fault is aborted.  If the
> fault is not beyond i_size the fault will continue and a new page will
> be added to the file.  It could be that truncation code modifies i_size
> after the check in fault code.  That is OK, as truncation code will soon
> remove the page.  The truncation code will wait until the fault is
> finished, as it must obtain the vma lock in write mode.

As the previous thread [1] points out, if vma->vm_private_data is NULL there won't be anything
providing the same type of synchronization around i_size as the fault mutex does in [2].
([2] will take the fault mutex for EVERY index in the truncated range.)

[1] https://lore.kernel.org/lkml/YyOKIhygl66cG8Yr@monkey/T/#m6b69af9e8cdba01246c2b210bd044bf895b815ee
[2] https://lore.kernel.org/lkml/20220824175757.20590-5-mike.kravetz@oracle.com/
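
A condensed, illustrative sketch of that per-index fault mutex scheme
from [2] (not the actual patch code):

	for (index = start; index < end; index++) {
		u32 hash = hugetlb_fault_mutex_hash(mapping, index);

		mutex_lock(&hugetlb_fault_mutex_table[hash]);
		/* remove the folio at this index, if present */
		mutex_unlock(&hugetlb_fault_mutex_table[hash]);
	}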

Apart from that, this patch looks good to me. Thanks Mike.

Thanks,
Miaohe Lin

Patch

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 009ae539b9b2..ed57a029eab0 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -568,26 +568,19 @@  static bool remove_inode_single_folio(struct hstate *h, struct inode *inode,
 
 	folio_lock(folio);
 	/*
-	 * After locking page, make sure mapping is the same.
-	 * We could have raced with page fault populate and
-	 * backout code.
+	 * We must remove the folio from page cache before removing
+	 * the region/ reserve map (hugetlb_unreserve_pages).  In
+	 * rare out of memory conditions, removal of the region/reserve
+	 * map could fail.  Correspondingly, the subpool and global
+	 * reserve usage count can need to be adjusted.
 	 */
-	if (folio_mapping(folio) == mapping) {
-		/*
-		 * We must remove the folio from page cache before removing
-		 * the region/ reserve map (hugetlb_unreserve_pages).  In
-		 * rare out of memory conditions, removal of the region/reserve
-		 * map could fail.  Correspondingly, the subpool and global
-		 * reserve usage count can need to be adjusted.
-		 */
-		VM_BUG_ON(HPageRestoreReserve(&folio->page));
-		hugetlb_delete_from_page_cache(&folio->page);
-		ret = true;
-		if (!truncate_op) {
-			if (unlikely(hugetlb_unreserve_pages(inode, index,
-								index + 1, 1)))
-				hugetlb_fix_reserve_counts(inode);
-		}
+	VM_BUG_ON(HPageRestoreReserve(&folio->page));
+	hugetlb_delete_from_page_cache(&folio->page);
+	ret = true;
+	if (!truncate_op) {
+		if (unlikely(hugetlb_unreserve_pages(inode, index,
+							index + 1, 1)))
+			hugetlb_fix_reserve_counts(inode);
 	}
 
 	folio_unlock(folio);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index e8cbc0f7cdaa..2207300791e5 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5561,6 +5561,7 @@  static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 	spinlock_t *ptl;
 	unsigned long haddr = address & huge_page_mask(h);
 	bool new_page, new_pagecache_page = false;
+	bool reserve_alloc = false;
 
 	/*
 	 * Currently, we are forced to kill the process in the event the
@@ -5616,6 +5617,8 @@  static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 		clear_huge_page(page, address, pages_per_huge_page(h));
 		__SetPageUptodate(page);
 		new_page = true;
+		if (HPageRestoreReserve(page))
+			reserve_alloc = true;
 
 		if (vma->vm_flags & VM_MAYSHARE) {
 			int err = hugetlb_add_to_page_cache(page, mapping, idx);
@@ -5679,10 +5682,6 @@  static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 	}
 
 	ptl = huge_pte_lock(h, mm, ptep);
-	size = i_size_read(mapping->host) >> huge_page_shift(h);
-	if (idx >= size)
-		goto backout;
-
 	ret = 0;
 	/* If pte changed from under us, retry */
 	if (!pte_same(huge_ptep_get(ptep), old_pte))
@@ -5726,10 +5725,10 @@  static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 backout:
 	spin_unlock(ptl);
 backout_unlocked:
-	unlock_page(page);
-	/* restore reserve for newly allocated pages not in page cache */
 	if (new_page && !new_pagecache_page)
 		restore_reserve_on_error(h, vma, haddr, page);
+
+	unlock_page(page);
 	put_page(page);
 	goto out;
 }
@@ -6061,26 +6060,12 @@  int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
 
 	ptl = huge_pte_lock(h, dst_mm, dst_pte);
 
-	/*
-	 * Recheck the i_size after holding PT lock to make sure not
-	 * to leave any page mapped (as page_mapped()) beyond the end
-	 * of the i_size (remove_inode_hugepages() is strict about
-	 * enforcing that). If we bail out here, we'll also leave a
-	 * page in the radix tree in the vm_shared case beyond the end
-	 * of the i_size, but remove_inode_hugepages() will take care
-	 * of it as soon as we drop the hugetlb_fault_mutex_table.
-	 */
-	size = i_size_read(mapping->host) >> huge_page_shift(h);
-	ret = -EFAULT;
-	if (idx >= size)
-		goto out_release_unlock;
-
-	ret = -EEXIST;
 	/*
 	 * We allow to overwrite a pte marker: consider when both MISSING|WP
 	 * registered, we firstly wr-protect a none pte which has no page cache
 	 * page backing it, then access the page.
 	 */
+	ret = -EEXIST;
 	if (!huge_pte_none_mostly(huge_ptep_get(dst_pte)))
 		goto out_release_unlock;