Message ID | 20221129193526.3588187-7-peterx@redhat.com (mailing list archive)
---|---
State | New |
Series | [01/10] mm/hugetlb: Let vma_offset_start() to return start
On 29.11.22 20:35, Peter Xu wrote:
> Since hugetlb_follow_page_mask() walks the pgtable, it needs the vma lock
> to make sure the pgtable page will not be freed concurrently.
>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---

Acked-by: David Hildenbrand <david@redhat.com>
On 11/29/22 14:35, Peter Xu wrote:
> Since hugetlb_follow_page_mask() walks the pgtable, it needs the vma lock
> to make sure the pgtable page will not be freed concurrently.
>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
> mm/hugetlb.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)

Thanks!

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 776e34ccf029..d6bb1d22f1c4 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6232,9 +6232,10 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
 	if (WARN_ON_ONCE(flags & FOLL_PIN))
 		return NULL;
 
+	hugetlb_vma_lock_read(vma);
 	pte = huge_pte_offset(mm, haddr, huge_page_size(h));
 	if (!pte)
-		return NULL;
+		goto out_unlock;
 
 	ptl = huge_pte_lock(h, mm, pte);
 	entry = huge_ptep_get(pte);
@@ -6257,6 +6258,8 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
 	}
 out:
 	spin_unlock(ptl);
+out_unlock:
+	hugetlb_vma_unlock_read(vma);
 	return page;
 }
 
Since hugetlb_follow_page_mask() walks the pgtable, it needs the vma lock
to make sure the pgtable page will not be freed concurrently.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/hugetlb.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
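For readers less familiar with the mm locking conventions, here is a minimal
user-space sketch of the pattern the patch applies: take a read lock before
walking a shared structure, and funnel every exit path (including the early
"entry not found" return) through a single unlock label. This is an analogy
only, not kernel code; the names table_lock and lookup_entry are hypothetical
and merely mirror the shape of the patched hugetlb_follow_page_mask().

/* Build with: cc sketch.c -lpthread */
#include <pthread.h>
#include <stdio.h>

#define TABLE_SIZE 16

static pthread_rwlock_t table_lock = PTHREAD_RWLOCK_INITIALIZER;
static int table[TABLE_SIZE] = { [3] = 42 };

static int lookup_entry(unsigned long idx)
{
	int val = -1;

	/* Plays the role of hugetlb_vma_lock_read(vma). */
	pthread_rwlock_rdlock(&table_lock);
	if (idx >= TABLE_SIZE)
		goto out_unlock;	/* was a bare "return NULL" before the patch */
	val = table[idx];
out_unlock:
	/* Plays the role of hugetlb_vma_unlock_read(vma). */
	pthread_rwlock_unlock(&table_lock);
	return val;
}

int main(void)
{
	printf("lookup_entry(3)  = %d\n", lookup_entry(3));	/* 42 */
	printf("lookup_entry(99) = %d\n", lookup_entry(99));	/* -1 */
	return 0;
}

The goto-based unwind is the idiomatic kernel way to guarantee the lock is
dropped on the early-failure path as well as the normal one, which is exactly
what the added out_unlock label in the patch provides.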