mm,hugetlb: take hugetlb_lock before decrementing h->resv_huge_pages

Message ID 20221017202505.0e6a4fcd@imladris.surriel.com (mailing list archive)
State New
Series mm,hugetlb: take hugetlb_lock before decrementing h->resv_huge_pages

Commit Message

Rik van Riel Oct. 18, 2022, 12:25 a.m. UTC
The h->*_huge_pages counters are protected by the hugetlb_lock, but
alloc_huge_page has a corner case where it can decrement the counter
outside of the lock.

This could lead to a corrupted value of h->resv_huge_pages, which we
have observed on our systems.

Take the hugetlb_lock before decrementing h->resv_huge_pages to avoid
a potential race.

Fixes: a88c76954804 ("mm: hugetlb: fix hugepage memory leak caused by wrong reserve count")
Cc: stable@kernel.org
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Glen McCready <gkmccready@meta.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Rik van Riel <riel@surriel.com>
---
 mm/hugetlb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Mike Kravetz Oct. 18, 2022, 2:05 a.m. UTC | #1
On 10/17/22 20:25, Rik van Riel wrote:
> The h->*_huge_pages counters are protected by the hugetlb_lock, but
> alloc_huge_page has a corner case where it can decrement the counter
> outside of the lock.
> 
> This could lead to a corrupted value of h->resv_huge_pages, which we
> have observed on our systems.
> 
> Take the hugetlb_lock before decrementing h->resv_huge_pages to avoid
> a potential race.
> 
> Fixes: a88c76954804 ("mm: hugetlb: fix hugepage memory leak caused by wrong reserve count")
> Cc: stable@kernel.org
> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
> Cc: Glen McCready <gkmccready@meta.com>
> Cc: Mike Kravetz <mike.kravetz@oracle.com>
> Cc: Muchun Song <songmuchun@bytedance.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Signed-off-by: Rik van Riel <riel@surriel.com>
> ---
>  mm/hugetlb.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Thanks Rik!  That case did slip through the cracks.

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Patch

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b586cdd75930..dede0337c07c 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2924,11 +2924,11 @@  struct page *alloc_huge_page(struct vm_area_struct *vma,
 		page = alloc_buddy_huge_page_with_mpol(h, vma, addr);
 		if (!page)
 			goto out_uncharge_cgroup;
+		spin_lock_irq(&hugetlb_lock);
 		if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
 			SetHPageRestoreReserve(page);
 			h->resv_huge_pages--;
 		}
-		spin_lock_irq(&hugetlb_lock);
 		list_add(&page->lru, &h->hugepage_activelist);
 		set_page_refcounted(page);
 		/* Fall through */