Message ID | 20220509062703.64249-2-songmuchun@bytedance.com (mailing list archive)
---|---
State | New
Series | add hugetlb_optimize_vmemmap sysctl
On 09.05.22 08:27, Muchun Song wrote:
> If the size of "struct page" is not a power of two but the feature that
> minimizes the overhead of struct page associated with each HugeTLB page
> is enabled, then the vmemmap pages of HugeTLB will be corrupted after
> remapping (a panic could happen, in theory). But this can only occur with
> !CONFIG_MEMCG && !CONFIG_SLUB on x86_64, which is not a conventional
> configuration nowadays. So it is not a real-world issue, just the result
> of a code review.
>
> But we cannot prevent anyone from configuring that combination of
> options. hugetlb_optimize_vmemmap should be disabled in this case to fix
> the issue.
>
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>

Acked-by: David Hildenbrand <david@redhat.com>
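To illustrate why the power-of-two requirement matters, here is a minimal userspace sketch (not kernel code; the 4 KiB page size and the candidate struct page sizes are assumptions for demonstration). The vmemmap optimization frees tail vmemmap pages because they all hold identical struct page contents; if sizeof(struct page) does not divide PAGE_SIZE evenly, some struct page straddles a page boundary, the tail vmemmap pages are no longer byte-identical, and remapping them to a single page corrupts data:

```c
#include <stdbool.h>
#include <stdio.h>

/* Same semantics as the kernel's is_power_of_2() in include/linux/log2.h. */
static bool is_power_of_2(unsigned long n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

int main(void)
{
	const unsigned long page_size = 4096;	/* assumed PAGE_SIZE */
	/*
	 * Candidate sizeof(struct page) values: 64 is the usual x86_64
	 * layout; 56 stands in for a shrunken layout such as the one
	 * produced by !CONFIG_MEMCG && !CONFIG_SLUB.
	 */
	const unsigned long sizes[] = { 56, 64 };

	for (unsigned int i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
		unsigned long sz = sizes[i];

		/*
		 * When sz is a power of two (<= page_size), page_size % sz
		 * == 0, so every struct page fits wholly inside one vmemmap
		 * page and all tail vmemmap pages hold identical bytes that
		 * can be deduplicated. Otherwise some struct page crosses a
		 * page boundary and the remapping would corrupt it.
		 */
		printf("sizeof(struct page) = %lu: power of 2: %s, crosses page boundary: %s\n",
		       sz,
		       is_power_of_2(sz) ? "yes" : "no",
		       page_size % sz ? "yes" : "no");
	}
	return 0;
}
```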
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 29554c6ef2ae..6254bb2d4ae5 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -28,12 +28,6 @@ EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
 
 static int __init hugetlb_vmemmap_early_param(char *buf)
 {
-	/* We cannot optimize if a "struct page" crosses page boundaries. */
-	if (!is_power_of_2(sizeof(struct page))) {
-		pr_warn("cannot free vmemmap pages because \"struct page\" crosses page boundaries\n");
-		return 0;
-	}
-
 	if (!buf)
 		return -EINVAL;
 
@@ -119,6 +113,12 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	if (!hugetlb_optimize_vmemmap_enabled())
 		return;
 
+	if (!is_power_of_2(sizeof(struct page))) {
+		pr_warn_once("cannot optimize vmemmap pages because \"struct page\" crosses page boundaries\n");
+		static_branch_disable(&hugetlb_optimize_vmemmap_key);
+		return;
+	}
+
 	vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
 	/*
 	 * The head page is not to be freed to buddy allocator, the other tail
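A design note on the diff: the power-of-two check moves out of the early command-line parser and into hugetlb_vmemmap_init(), where static_branch_disable() forces the static key off even when the boot parameter has already enabled it, and pr_warn_once() keeps the warning from repeating for every hstate. Below is a minimal sketch of that static-key pattern, assuming a kernel build environment; demo_optimize_key, demo_init(), and demo_enabled() are hypothetical names, while the jump-label API itself is real:

```c
#include <linux/init.h>
#include <linux/jump_label.h>
#include <linux/log2.h>
#include <linux/mm_types.h>
#include <linux/types.h>

/* Hypothetical key standing in for hugetlb_optimize_vmemmap_key. */
DEFINE_STATIC_KEY_FALSE(demo_optimize_key);

static void __init demo_init(void)
{
	/*
	 * Mirrors the patch: if the precondition fails at init time,
	 * force the key off so every later fast-path test reads false,
	 * even if an earlier boot-parameter handler had enabled the key.
	 */
	if (!is_power_of_2(sizeof(struct page)))
		static_branch_disable(&demo_optimize_key);
}

static bool demo_enabled(void)
{
	/* Patched-in branch: nearly free when the key is disabled. */
	return static_branch_unlikely(&demo_optimize_key);
}
```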