Message ID | 20210114114435.40075-1-linmiaohe@huawei.com
---|---
State | New, archived
Series | mm/hugetlb: Use helper huge_page_order and pages_per_huge_page
On 14.01.21 12:44, Miaohe Lin wrote:
> Since commit a5516438959d ("hugetlb: modular state for hugetlb page size"),
> we can use huge_page_order to access hstate->order and pages_per_huge_page
> to fetch the pages per huge page. But gather_bootmem_prealloc() forgot to
> use it.
>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/hugetlb.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index fe2da9ad6233..c04d922757c7 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2476,7 +2476,7 @@ static void __init gather_bootmem_prealloc(void)
>  		struct hstate *h = m->hstate;
>
>  		WARN_ON(page_count(page) != 1);
> -		prep_compound_huge_page(page, h->order);
> +		prep_compound_huge_page(page, huge_page_order(h));
>  		WARN_ON(PageReserved(page));
>  		prep_new_huge_page(h, page, page_to_nid(page));
>  		put_page(page); /* free it into the hugepage allocator */
> @@ -2488,7 +2488,7 @@ static void __init gather_bootmem_prealloc(void)
>  		 * side-effects, like CommitLimit going negative.
>  		 */
>  		if (hstate_is_gigantic(h))
> -			adjust_managed_page_count(page, 1 << h->order);
> +			adjust_managed_page_count(page, pages_per_huge_page(h));
>  		cond_resched();
>  	}
>  }

Reviewed-by: David Hildenbrand <david@redhat.com>
On 1/14/21 3:44 AM, Miaohe Lin wrote:
> Since commit a5516438959d ("hugetlb: modular state for hugetlb page size"),
> we can use huge_page_order to access hstate->order and pages_per_huge_page
> to fetch the pages per huge page. But gather_bootmem_prealloc() forgot to
> use it.
>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/hugetlb.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)

Thanks,
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index fe2da9ad6233..c04d922757c7 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2476,7 +2476,7 @@ static void __init gather_bootmem_prealloc(void)
 		struct hstate *h = m->hstate;

 		WARN_ON(page_count(page) != 1);
-		prep_compound_huge_page(page, h->order);
+		prep_compound_huge_page(page, huge_page_order(h));
 		WARN_ON(PageReserved(page));
 		prep_new_huge_page(h, page, page_to_nid(page));
 		put_page(page); /* free it into the hugepage allocator */
@@ -2488,7 +2488,7 @@ static void __init gather_bootmem_prealloc(void)
 		 * side-effects, like CommitLimit going negative.
 		 */
 		if (hstate_is_gigantic(h))
-			adjust_managed_page_count(page, 1 << h->order);
+			adjust_managed_page_count(page, pages_per_huge_page(h));
 		cond_resched();
 	}
 }
Since commit a5516438959d ("hugetlb: modular state for hugetlb page size"),
we can use huge_page_order to access hstate->order and pages_per_huge_page
to fetch the pages per huge page. But gather_bootmem_prealloc() forgot to
use it.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/hugetlb.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)