Message ID | 20240417211836.2742593-3-peterx@redhat.com (mailing list archive) |
---|---|
State | New |
Series | mm/hugetlb: Fix missing hugetlb_lock for memcg resv uncharge |
On Wed, Apr 17, 2024 at 2:18 PM Peter Xu <peterx@redhat.com> wrote:
>
> There is a recent report on UFFDIO_COPY over hugetlb:
>
> https://lore.kernel.org/all/000000000000ee06de0616177560@google.com/
>
> 350:	lockdep_assert_held(&hugetlb_lock);
>
> This should be an issue in hugetlb but triggered in a userfault context,
> where it goes into the unlikely path in which two threads modify the resv
> map together.  Mike has a fix in that path for the resv uncharge, but it
> looks like the locking requirement was overlooked:
> hugetlb_cgroup_uncharge_folio_rsvd() will update the cgroup pointer, so it
> needs to be called with the lock held.
>
> Looks like stable material, so have it copied.
>
> Reported-by: syzbot+4b8077a5fccc61c385a1@syzkaller.appspotmail.com
> Cc: Mina Almasry <almasrymina@google.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: linux-stable <stable@vger.kernel.org>
> Fixes: 79aa925bf239 ("hugetlb_cgroup: fix reservation accounting")
> Signed-off-by: Peter Xu <peterx@redhat.com>

Reviewed-by: Mina Almasry <almasrymina@google.com>

> ---
>  mm/hugetlb.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 26ab9dfc7d63..3158a55ce567 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -3247,9 +3247,12 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
>
>  		rsv_adjust = hugepage_subpool_put_pages(spool, 1);
>  		hugetlb_acct_memory(h, -rsv_adjust);
> -		if (deferred_reserve)
> +		if (deferred_reserve) {
> +			spin_lock_irq(&hugetlb_lock);
>  			hugetlb_cgroup_uncharge_folio_rsvd(hstate_index(h),
>  					pages_per_huge_page(h), folio);
> +			spin_unlock_irq(&hugetlb_lock);
> +		}
>  	}
>
>  	if (!memcg_charge_ret)
> --
> 2.44.0
>
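For context on why the lock is mandatory here: the assertion that syzbot tripped lives in the hugetlb cgroup uncharge path, which rewrites the folio's cgroup pointer and therefore asserts hugetlb_lock rather than taking it itself. The following is a rough, abridged sketch of that callee-side contract; it is illustrative only, not the verbatim mm/hugetlb_cgroup.c source, and the `sketch_*` helper names are made up for this example.

```c
/*
 * Abridged sketch of the uncharge path's locking contract (illustrative,
 * not the exact kernel code).  Because the function rewrites the folio's
 * cgroup pointer, it leaves locking to its callers and merely asserts it;
 * this is the check the syzbot report hit (line ~350 in the report).
 */
static void sketch_hugetlb_cgroup_uncharge_folio(int idx,
						 unsigned long nr_pages,
						 struct folio *folio,
						 bool rsvd)
{
	struct hugetlb_cgroup *h_cg;

	/* Caller must hold hugetlb_lock; alloc_hugetlb_folio() did not. */
	lockdep_assert_held(&hugetlb_lock);

	h_cg = sketch_cgroup_from_folio(folio, rsvd);	/* read cgroup pointer */
	if (!h_cg)
		return;

	sketch_set_cgroup(folio, NULL, rsvd);		/* clear cgroup pointer */
	/* ... uncharge nr_pages from h_cg's reservation counter ... */
}
```

Given that contract, the fix is simply to widen the caller's lock scope: take hugetlb_lock around the deferred-reserve uncharge in alloc_hugetlb_folio(), as the diff below does.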
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 26ab9dfc7d63..3158a55ce567 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3247,9 +3247,12 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 
 		rsv_adjust = hugepage_subpool_put_pages(spool, 1);
 		hugetlb_acct_memory(h, -rsv_adjust);
-		if (deferred_reserve)
+		if (deferred_reserve) {
+			spin_lock_irq(&hugetlb_lock);
 			hugetlb_cgroup_uncharge_folio_rsvd(hstate_index(h),
 					pages_per_huge_page(h), folio);
+			spin_unlock_irq(&hugetlb_lock);
+		}
 	}
 
 	if (!memcg_charge_ret)
There is a recent report on UFFDIO_COPY over hugetlb:

https://lore.kernel.org/all/000000000000ee06de0616177560@google.com/

350:	lockdep_assert_held(&hugetlb_lock);

This should be an issue in hugetlb but triggered in a userfault context,
where it goes into the unlikely path in which two threads modify the resv
map together.  Mike has a fix in that path for the resv uncharge, but it
looks like the locking requirement was overlooked:
hugetlb_cgroup_uncharge_folio_rsvd() will update the cgroup pointer, so it
needs to be called with the lock held.

Looks like stable material, so have it copied.

Reported-by: syzbot+4b8077a5fccc61c385a1@syzkaller.appspotmail.com
Cc: Mina Almasry <almasrymina@google.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: linux-stable <stable@vger.kernel.org>
Fixes: 79aa925bf239 ("hugetlb_cgroup: fix reservation accounting")
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/hugetlb.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
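For readers less familiar with this locking pattern, here is a minimal, self-contained userspace analogy (pthreads standing in for the kernel's spinlock and lockdep machinery; every name below is made up for illustration, nothing here is kernel API): the helper that mutates shared state asserts that its caller holds the lock, and the caller is responsible for acquiring it around the call, which is exactly what the patch adds in alloc_hugetlb_folio().

```c
/* Build with: cc -Wall -o lock_demo lock_demo.c -lpthread */
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Userspace stand-in for hugetlb_lock (the kernel uses spin_lock_irq()). */
static pthread_mutex_t demo_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_t demo_owner;
static bool demo_locked;
static long demo_rsvd_pages = 512;	/* stand-in for shared accounting state */

/* Poor man's lockdep_assert_held(): abort if the caller forgot the lock. */
static void demo_assert_held(void)
{
	assert(demo_locked && pthread_equal(demo_owner, pthread_self()));
}

static void demo_lock_acquire(void)
{
	pthread_mutex_lock(&demo_lock);
	demo_owner = pthread_self();
	demo_locked = true;
}

static void demo_lock_release(void)
{
	demo_locked = false;
	pthread_mutex_unlock(&demo_lock);
}

/*
 * Analogue of the reservation uncharge helper: it mutates shared state,
 * so it asserts the lock instead of taking it itself.
 */
static void demo_uncharge_rsvd(long nr_pages)
{
	demo_assert_held();
	demo_rsvd_pages -= nr_pages;
}

int main(void)
{
	/*
	 * Buggy shape: calling demo_uncharge_rsvd(1) here, without the lock,
	 * would trip demo_assert_held() -- the userspace equivalent of the
	 * lockdep splat in the syzbot report.
	 *
	 * Fixed shape, mirroring the patch:
	 */
	demo_lock_acquire();
	demo_uncharge_rsvd(1);
	demo_lock_release();

	printf("reserved pages now: %ld\n", demo_rsvd_pages);
	return 0;
}
```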