
[V2] mm/page_alloc: Ensure that HUGETLB_PAGE_ORDER is less than MAX_ORDER

Message ID 1618199302-29335-1-git-send-email-anshuman.khandual@arm.com (mailing list archive)
State New, archived
Series [V2] mm/page_alloc: Ensure that HUGETLB_PAGE_ORDER is less than MAX_ORDER

Commit Message

Anshuman Khandual April 12, 2021, 3:48 a.m. UTC
pageblock_order must always be less than MAX_ORDER, otherwise it might lead
to a warning during boot. A similar problem was fixed on the arm64 platform
with commit 79cc2ed5a716 ("arm64/mm: Drop THP conditionality from
FORCE_MAX_ZONEORDER"). Assert the above condition before HUGETLB_PAGE_ORDER
gets assigned as pageblock_order. This will help detect the problem earlier
on platforms where HUGETLB_PAGE_SIZE_VARIABLE is enabled.

Cc: David Hildenbrand <david@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
Changes in V2:

- Changed WARN_ON() to BUILD_BUG_ON() per David

Changes in V1:

https://patchwork.kernel.org/project/linux-mm/patch/1617947717-2424-1-git-send-email-anshuman.khandual@arm.com/

 mm/page_alloc.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

Comments

Anshuman Khandual April 12, 2021, 8:06 a.m. UTC | #1
+ linuxppc-dev@lists.ozlabs.org
+ linux-ia64@vger.kernel.org

On 4/12/21 9:18 AM, Anshuman Khandual wrote:
> pageblock_order must always be less than MAX_ORDER, otherwise it might lead
> to a warning during boot. A similar problem was fixed on the arm64 platform
> with commit 79cc2ed5a716 ("arm64/mm: Drop THP conditionality from
> FORCE_MAX_ZONEORDER"). Assert the above condition before HUGETLB_PAGE_ORDER
> gets assigned as pageblock_order. This will help detect the problem earlier
> on platforms where HUGETLB_PAGE_SIZE_VARIABLE is enabled.
> 
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
> ---
> Changes in V2:
> 
> - Changed WARN_ON() to BUILD_BUG_ON() per David
> 
> Changes in V1:
> 
> https://patchwork.kernel.org/project/linux-mm/patch/1617947717-2424-1-git-send-email-anshuman.khandual@arm.com/
> 
>  mm/page_alloc.c | 11 +++++++++--
>  1 file changed, 9 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index cfc72873961d..19283bff4bec 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -6875,10 +6875,17 @@ void __init set_pageblock_order(void)
>  	if (pageblock_order)
>  		return;
>  
> -	if (HPAGE_SHIFT > PAGE_SHIFT)
> +	if (HPAGE_SHIFT > PAGE_SHIFT) {
> +		/*
> +		 * pageblock_order must always be less than
> +		 * MAX_ORDER. So does HUGETLB_PAGE_ORDER if
> +		 * that is being assigned here.
> +		 */
> +		BUILD_BUG_ON(HUGETLB_PAGE_ORDER >= MAX_ORDER);

Unfortunately the build test fails on both platforms (powerpc and ia64)
that enable HUGETLB_PAGE_SIZE_VARIABLE and where this check would make
sense. I somehow overlooked the cross-compile build failure that actually
detected this problem.

But I am wondering why this assert does not hold true, and how these platforms
avoid the warning during boot (or do they?) at mm/vmstat.c:1092 like
arm64 did.

static int __fragmentation_index(unsigned int order, struct contig_page_info *info)
{
        unsigned long requested = 1UL << order;

        if (WARN_ON_ONCE(order >= MAX_ORDER))
                return 0;
....

Can pageblock_order really exceed MAX_ORDER - 1 ?
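
To get a feel for how that could happen, here is a minimal userspace sketch of
the arithmetic, using the generic definition HUGETLB_PAGE_ORDER ==
HPAGE_SHIFT - PAGE_SHIFT; the shift values below are purely made up for
illustration and are not taken from the failing ia64 or powerpc configs:

#include <stdio.h>

/* Illustrative values only; real configs derive these from Kconfig. */
#define PAGE_SHIFT		12	/* 4K base pages */
#define HPAGE_SHIFT		25	/* 32M huge pages */
#define MAX_ORDER		11	/* buddy limit */
#define HUGETLB_PAGE_ORDER	(HPAGE_SHIFT - PAGE_SHIFT)

int main(void)
{
	/* Largest buddy allocation: order MAX_ORDER - 1. */
	unsigned long buddy_max = 1UL << (MAX_ORDER - 1 + PAGE_SHIFT);
	unsigned long hpage = 1UL << HPAGE_SHIFT;

	printf("HUGETLB_PAGE_ORDER = %d, MAX_ORDER = %d\n",
	       HUGETLB_PAGE_ORDER, MAX_ORDER);
	printf("huge page = %lu MiB, buddy max = %lu MiB\n",
	       hpage >> 20, buddy_max >> 20);

	/*
	 * set_pageblock_order() would pick HUGETLB_PAGE_ORDER as
	 * pageblock_order here, which is exactly what the BUILD_BUG_ON
	 * above tries to rule out.
	 */
	if (HUGETLB_PAGE_ORDER >= MAX_ORDER)
		printf("pageblock_order would reach MAX_ORDER or beyond\n");
	return 0;
}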

>  		order = HUGETLB_PAGE_ORDER;
> -	else
> +	} else {
>  		order = MAX_ORDER - 1;
> +	}
>  
>  	/*
>  	 * Assume the largest contiguous order of interest is a huge page.
>
kernel test robot April 12, 2021, 8:10 a.m. UTC | #2
Hi Anshuman,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on hnaz-linux-mm/master]

url:    https://github.com/0day-ci/linux/commits/Anshuman-Khandual/mm-page_alloc-Ensure-that-HUGETLB_PAGE_ORDER-is-less-than-MAX_ORDER/20210412-114918
base:   https://github.com/hnaz/linux-mm master
config: ia64-allmodconfig (attached as .config)
compiler: ia64-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/2dcbbc0896348946a9551948765b9b242cf37da6
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Anshuman-Khandual/mm-page_alloc-Ensure-that-HUGETLB_PAGE_ORDER-is-less-than-MAX_ORDER/20210412-114918
        git checkout 2dcbbc0896348946a9551948765b9b242cf37da6
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=ia64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   In file included from <command-line>:
   mm/page_alloc.c: In function 'set_pageblock_order':
>> include/linux/compiler_types.h:319:38: error: call to '__compiletime_assert_442' declared with attribute error: BUILD_BUG_ON failed: HUGETLB_PAGE_ORDER >= MAX_ORDER
     319 |  _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
         |                                      ^
   include/linux/compiler_types.h:300:4: note: in definition of macro '__compiletime_assert'
     300 |    prefix ## suffix();    \
         |    ^~~~~~
   include/linux/compiler_types.h:319:2: note: in expansion of macro '_compiletime_assert'
     319 |  _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
         |  ^~~~~~~~~~~~~~~~~~~
   include/linux/build_bug.h:39:37: note: in expansion of macro 'compiletime_assert'
      39 | #define BUILD_BUG_ON_MSG(cond, msg) compiletime_assert(!(cond), msg)
         |                                     ^~~~~~~~~~~~~~~~~~
   include/linux/build_bug.h:50:2: note: in expansion of macro 'BUILD_BUG_ON_MSG'
      50 |  BUILD_BUG_ON_MSG(condition, "BUILD_BUG_ON failed: " #condition)
         |  ^~~~~~~~~~~~~~~~
   mm/page_alloc.c:6660:3: note: in expansion of macro 'BUILD_BUG_ON'
    6660 |   BUILD_BUG_ON(HUGETLB_PAGE_ORDER >= MAX_ORDER);
         |   ^~~~~~~~~~~~

Kconfig warnings: (for reference only)
   WARNING: unmet direct dependencies detected for FRAME_POINTER
   Depends on DEBUG_KERNEL && (M68K || UML || SUPERH) || ARCH_WANT_FRAME_POINTERS
   Selected by
   - FAULT_INJECTION_STACKTRACE_FILTER && FAULT_INJECTION_DEBUG_FS && STACKTRACE_SUPPORT && !X86_64 && !MIPS && !PPC && !S390 && !MICROBLAZE && !ARM && !ARC && !X86


vim +/__compiletime_assert_442 +319 include/linux/compiler_types.h

cb2b5d59ec351b Andrew Morton 2020-08-01  305  
cb2b5d59ec351b Andrew Morton 2020-08-01  306  #define _compiletime_assert(condition, msg, prefix, suffix) \
cb2b5d59ec351b Andrew Morton 2020-08-01  307  	__compiletime_assert(condition, msg, prefix, suffix)
cb2b5d59ec351b Andrew Morton 2020-08-01  308  
cb2b5d59ec351b Andrew Morton 2020-08-01  309  /**
cb2b5d59ec351b Andrew Morton 2020-08-01  310   * compiletime_assert - break build and emit msg if condition is false
cb2b5d59ec351b Andrew Morton 2020-08-01  311   * @condition: a compile-time constant condition to check
cb2b5d59ec351b Andrew Morton 2020-08-01  312   * @msg:       a message to emit if condition is false
cb2b5d59ec351b Andrew Morton 2020-08-01  313   *
cb2b5d59ec351b Andrew Morton 2020-08-01  314   * In tradition of POSIX assert, this macro will break the build if the
cb2b5d59ec351b Andrew Morton 2020-08-01  315   * supplied condition is *false*, emitting the supplied error message if the
cb2b5d59ec351b Andrew Morton 2020-08-01  316   * compiler has support to do so.
cb2b5d59ec351b Andrew Morton 2020-08-01  317   */
cb2b5d59ec351b Andrew Morton 2020-08-01  318  #define compiletime_assert(condition, msg) \
cb2b5d59ec351b Andrew Morton 2020-08-01 @319  	_compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
cb2b5d59ec351b Andrew Morton 2020-08-01  320  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
David Hildenbrand April 12, 2021, 8:47 a.m. UTC | #3
On 12.04.21 10:06, Anshuman Khandual wrote:
> + linuxppc-dev@lists.ozlabs.org
> + linux-ia64@vger.kernel.org
> 
> On 4/12/21 9:18 AM, Anshuman Khandual wrote:
>> pageblock_order must always be less than MAX_ORDER, otherwise it might lead
>> to a warning during boot. A similar problem was fixed on the arm64 platform
>> with commit 79cc2ed5a716 ("arm64/mm: Drop THP conditionality from
>> FORCE_MAX_ZONEORDER"). Assert the above condition before HUGETLB_PAGE_ORDER
>> gets assigned as pageblock_order. This will help detect the problem earlier
>> on platforms where HUGETLB_PAGE_SIZE_VARIABLE is enabled.
>>
>> Cc: David Hildenbrand <david@redhat.com>
>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Cc: linux-mm@kvack.org
>> Cc: linux-kernel@vger.kernel.org
>> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
>> ---
>> Changes in V2:
>>
>> - Changed WARN_ON() to BUILD_BUG_ON() per David
>>
>> Changes in V1:
>>
>> https://patchwork.kernel.org/project/linux-mm/patch/1617947717-2424-1-git-send-email-anshuman.khandual@arm.com/
>>
>>   mm/page_alloc.c | 11 +++++++++--
>>   1 file changed, 9 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index cfc72873961d..19283bff4bec 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -6875,10 +6875,17 @@ void __init set_pageblock_order(void)
>>   	if (pageblock_order)
>>   		return;
>>   
>> -	if (HPAGE_SHIFT > PAGE_SHIFT)
>> +	if (HPAGE_SHIFT > PAGE_SHIFT) {
>> +		/*
>> +		 * pageblock_order must always be less than
>> +		 * MAX_ORDER. So does HUGETLB_PAGE_ORDER if
>> +		 * that is being assigned here.
>> +		 */
>> +		BUILD_BUG_ON(HUGETLB_PAGE_ORDER >= MAX_ORDER);
> 
> Unfortunately the build test fails on both platforms (powerpc and ia64)
> that enable HUGETLB_PAGE_SIZE_VARIABLE and where this check would make
> sense. I somehow overlooked the cross-compile build failure that actually
> detected this problem.
> 
> But I am wondering why this assert does not hold true, and how these platforms
> avoid the warning during boot (or do they?) at mm/vmstat.c:1092 like
> arm64 did.
> 
> static int __fragmentation_index(unsigned int order, struct contig_page_info *info)
> {
>          unsigned long requested = 1UL << order;
> 
>          if (WARN_ON_ONCE(order >= MAX_ORDER))
>                  return 0;
> ....
> 
> Can pageblock_order really exceed MAX_ORDER - 1 ?

Ehm, until now I was under the impression that such configurations
wouldn't exist.

And originally, HUGETLB_PAGE_SIZE_VARIABLE was introduced to handle
hugepage sizes that are all *smaller* than MAX_ORDER - 1: see d9c234005227
("Do not depend on MAX_ORDER when grouping pages by mobility")


However, looking into init_cma_reserved_pageblock():

	if (pageblock_order >= MAX_ORDER) {
		i = pageblock_nr_pages;
		...
	}


But it's kind of weird, isn't it? Let's assume we have MAX_ORDER - 1 
correspond to 4 MiB and pageblock_order correspond to 8 MiB.

Sure, we'd be grouping pages in 8 MiB chunks, however, we cannot even 
allocate 8 MiB chunks via the buddy. So only alloc_contig_range() could 
really grab them (IOW: gigantic pages).

Further, we have code like deferred_free_range(), where we end up 
calling __free_pages_core()->...->__free_one_page() with 
pageblock_order. Wouldn't we end up setting the buddy order to something 
 > MAX_ORDER -1 on that path?

Having pageblock_order > MAX_ORDER feels wrong and looks shaky.
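
To make the init_cma_reserved_pageblock() special case above concrete: a
pageblock that exceeds the largest buddy order cannot sit on any free list as
a single block, so that branch appears to hand it over piecewise. A standalone
sketch of that splitting (plain userspace C, not the kernel code; the orders
are made up):

#include <stdio.h>

/* Made-up orders: a pageblock larger than the largest buddy block. */
#define MAX_ORDER		11
#define PAGEBLOCK_ORDER		13

int main(void)
{
	unsigned long nr_pages = 1UL << PAGEBLOCK_ORDER;
	unsigned long chunk = 1UL << (MAX_ORDER - 1);
	unsigned long pfn;

	/*
	 * No free list can hold an order-13 block when the largest buddy
	 * order is MAX_ORDER - 1 = 10, so the pageblock is released as
	 * several MAX_ORDER - 1 pieces -- and the buddy can never merge
	 * them back into anything bigger.
	 */
	for (pfn = 0; pfn < nr_pages; pfn += chunk)
		printf("free chunk at pfn offset %lu, order %d\n",
		       pfn, MAX_ORDER - 1);

	printf("%lu chunks for one pageblock\n", nr_pages / chunk);
	return 0;
}
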
Anshuman Khandual April 19, 2021, 3:45 a.m. UTC | #4
On 4/12/21 2:17 PM, David Hildenbrand wrote:
> On 12.04.21 10:06, Anshuman Khandual wrote:
>> + linuxppc-dev@lists.ozlabs.org
>> + linux-ia64@vger.kernel.org
>>
>> On 4/12/21 9:18 AM, Anshuman Khandual wrote:
>>> pageblock_order must always be less than MAX_ORDER, otherwise it might lead
>>> to a warning during boot. A similar problem was fixed on the arm64 platform
>>> with commit 79cc2ed5a716 ("arm64/mm: Drop THP conditionality from
>>> FORCE_MAX_ZONEORDER"). Assert the above condition before HUGETLB_PAGE_ORDER
>>> gets assigned as pageblock_order. This will help detect the problem earlier
>>> on platforms where HUGETLB_PAGE_SIZE_VARIABLE is enabled.
>>>
>>> Cc: David Hildenbrand <david@redhat.com>
>>> Cc: Andrew Morton <akpm@linux-foundation.org>
>>> Cc: linux-mm@kvack.org
>>> Cc: linux-kernel@vger.kernel.org
>>> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
>>> ---
>>> Changes in V2:
>>>
>>> - Changed WARN_ON() to BUILD_BUG_ON() per David
>>>
>>> Changes in V1:
>>>
>>> https://patchwork.kernel.org/project/linux-mm/patch/1617947717-2424-1-git-send-email-anshuman.khandual@arm.com/
>>>
>>>   mm/page_alloc.c | 11 +++++++++--
>>>   1 file changed, 9 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>> index cfc72873961d..19283bff4bec 100644
>>> --- a/mm/page_alloc.c
>>> +++ b/mm/page_alloc.c
>>> @@ -6875,10 +6875,17 @@ void __init set_pageblock_order(void)
>>>       if (pageblock_order)
>>>           return;
>>>   -    if (HPAGE_SHIFT > PAGE_SHIFT)
>>> +    if (HPAGE_SHIFT > PAGE_SHIFT) {
>>> +        /*
>>> +         * pageblock_order must always be less than
>>> +         * MAX_ORDER. So does HUGETLB_PAGE_ORDER if
>>> +         * that is being assigned here.
>>> +         */
>>> +        BUILD_BUG_ON(HUGETLB_PAGE_ORDER >= MAX_ORDER);
>>
>> Unfortunately the build test fails on both platforms (powerpc and ia64)
>> that enable HUGETLB_PAGE_SIZE_VARIABLE and where this check would make
>> sense. I somehow overlooked the cross-compile build failure that actually
>> detected this problem.
>>
>> But I am wondering why this assert does not hold true, and how these platforms
>> avoid the warning during boot (or do they?) at mm/vmstat.c:1092 like
>> arm64 did.
>>
>> static int __fragmentation_index(unsigned int order, struct contig_page_info *info)
>> {
>>          unsigned long requested = 1UL << order;
>>
>>          if (WARN_ON_ONCE(order >= MAX_ORDER))
>>                  return 0;
>> ....
>>
>> Can pageblock_order really exceed MAX_ORDER - 1 ?
> 
> Ehm, for now I was under the impression that such configurations wouldn't exist.
> 
> And originally, HUGETLB_PAGE_SIZE_VARIABLE was introduced to handle hugepage sizes that are all *smaller* than MAX_ORDER - 1: see d9c234005227 ("Do not depend on MAX_ORDER when grouping pages by mobility")

Right.

> 
> 
> However, looking into init_cma_reserved_pageblock():
> 
>     if (pageblock_order >= MAX_ORDER) {
>         i = pageblock_nr_pages;
>         ...
>     }
> 
> 
> But it's kind of weird, isn't it? Let's assume we have MAX_ORDER - 1 correspond to 4 MiB and pageblock_order correspond to 8 MiB.
> 
> Sure, we'd be grouping pages in 8 MiB chunks, however, we cannot even allocate 8 MiB chunks via the buddy. So only alloc_contig_range() could really grab them (IOW: gigantic pages).

Right.

> 
> Further, we have code like deferred_free_range(), where we end up calling __free_pages_core()->...->__free_one_page() with pageblock_order. Wouldn't we end up setting the buddy order to something > MAX_ORDER -1 on that path?

Agreed.

> 
> Having pageblock_order > MAX_ORDER feels wrong and looks shaky.
> 
Agreed, definitely does not look right. Lets see what other folks
might have to say on this.

+ Christoph Lameter <cl@linux.com>
Christoph Lameter April 19, 2021, 10:48 a.m. UTC | #5
On Mon, 19 Apr 2021, Anshuman Khandual wrote:

> >> Unfortunately the build test fails on both platforms (powerpc and ia64)
> >> that enable HUGETLB_PAGE_SIZE_VARIABLE and where this check would make
> >> sense. I somehow overlooked the cross-compile build failure that actually
> >> detected this problem.
> >>
> >> But I am wondering why this assert does not hold true, and how these platforms
> >> avoid the warning during boot (or do they?) at mm/vmstat.c:1092 like
> >> arm64 did.
> >>
> >> static int __fragmentation_index(unsigned int order, struct contig_page_info *info)
> >> {
> >>          unsigned long requested = 1UL << order;
> >>
> >>          if (WARN_ON_ONCE(order >= MAX_ORDER))
> >>                  return 0;
> >> ....
> >>
> >> Can pageblock_order really exceed MAX_ORDER - 1 ?

You can have larger blocks but you would need to allocate multiple
contiguous max order blocks or do it at boot time before the buddy
allocator is active.

What IA64 did was to do this at boot time thereby avoiding the buddy
lists. And it had a separate virtual address range and page table for the
huge pages.

Looks like the current code does these allocations via CMA which should
also bypass the buddy allocator.


> >     }
> >
> >
> > But it's kind of weird, isn't it? Let's assume we have MAX_ORDER - 1 correspond to 4 MiB and pageblock_order correspond to 8 MiB.
> >
> > Sure, we'd be grouping pages in 8 MiB chunks, however, we cannot even
> > allocate 8 MiB chunks via the buddy. So only alloc_contig_range()
> > could really grab them (IOW: gigantic pages).
>
> Right.

But then you can avoid the buddy allocator.

> > Further, we have code like deferred_free_range(), where we end up
> > calling __free_pages_core()->...->__free_one_page() with
> > pageblock_order. Wouldn't we end up setting the buddy order to
> > something > MAX_ORDER -1 on that path?
>
> Agreed.

We would need to return the supersized block to the huge page pool and not
to the buddy allocator. There is a special callback in the compound page
so that you can call an alternate free function that is not the buddy
allocator.
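
As a generic illustration of that pattern -- this is not the kernel's actual
API, just the shape of the idea: the owner of an oversized block registers its
own release function, so the common "free this page" path never hands the
block to the buddy:

#include <stdio.h>

/* Toy model of a page whose owner supplies an alternate free routine. */
struct toy_page {
	const char *name;
	void (*free)(struct toy_page *);	/* alternate free callback */
};

static void free_to_huge_pool(struct toy_page *p)
{
	printf("%s: returned to the huge page pool, buddy never sees it\n",
	       p->name);
}

static void free_to_buddy(struct toy_page *p)
{
	printf("%s: returned to the buddy free lists\n", p->name);
}

/* Common release path: it only follows the registered callback. */
static void put_toy_page(struct toy_page *p)
{
	p->free(p);
}

int main(void)
{
	struct toy_page huge = { "supersized huge page", free_to_huge_pool };
	struct toy_page normal = { "order-0 page", free_to_buddy };

	put_toy_page(&huge);
	put_toy_page(&normal);
	return 0;
}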

>
> >
> > Having pageblock_order > MAX_ORDER feels wrong and looks shaky.
> >
> Agreed, definitely does not look right. Lets see what other folks
> might have to say on this.
>
> + Christoph Lameter <cl@linux.com>
>

It has been done successfully for a long time and is running in numerous
configurations.
David Hildenbrand April 20, 2021, 9:03 a.m. UTC | #6
Hi Christoph,

thanks for your insight.

> You can have larger blocks but you would need to allocate multiple
> contiguous max order blocks or do it at boot time before the buddy
> allocator is active.
> 
> What IA64 did was to do this at boot time thereby avoiding the buddy
> lists. And it had a separate virtual address range and page table for the
> huge pages.
> 
> Looks like the current code does these allocations via CMA which should
> also bypass the buddy allocator.

When using CMA, the pageblock size doesn't really matter for fragmentation
avoidance, i.e., for somewhat reliable allocation of memory chunks with an
order > MAX_ORDER - 1.

IOW, when using CMA for hugetlb, we don't need pageblock_order >
MAX_ORDER - 1.

> 
> 
>>>      }
>>>
>>>
>>> But it's kind of weird, isn't it? Let's assume we have MAX_ORDER - 1 correspond to 4 MiB and pageblock_order correspond to 8 MiB.
>>>
>>> Sure, we'd be grouping pages in 8 MiB chunks, however, we cannot even
>>> allocate 8 MiB chunks via the buddy. So only alloc_contig_range()
>>> could really grab them (IOW: gigantic pages).
>>
>> Right.
> 
> But then you can avoid the buddy allocator.
> 
>>> Further, we have code like deferred_free_range(), where we end up
>>> calling __free_pages_core()->...->__free_one_page() with
>>> pageblock_order. Wouldn't we end up setting the buddy order to
>>> something > MAX_ORDER -1 on that path?
>>
>> Agreed.
> 
> We would need to return the supersized block to the huge page pool and not
> to the buddy allocator. There is a special callback in the compound page
> so that you can call an alternate free function that is not the buddy
> allocator.

Sorry, but that doesn't make any sense. We are talking about bringup 
code, where we transition from memblock to the buddy and fill the free 
page lists. Looking at the code, deferred initialization of the memmap
is broken on these setups -- so I assume deferred memmap init is never enabled.

> 
>>
>>>
>>> Having pageblock_order > MAX_ORDER feels wrong and looks shaky.
>>>
>> Agreed, definitely does not look right. Lets see what other folks
>> might have to say on this.
>>
>> + Christoph Lameter <cl@linux.com>
>>
> 
> It was done for a long time successfully and is running in numerous
> configurations.

Enforcing pageblock_order < MAX_ORDER would mean that runtime allocation
of gigantic (here: huge) pages (HUGETLB_PAGE_ORDER >= MAX_ORDER) via
alloc_contig_pages() becomes less reliable. To compensate, relevant
archs could switch to "hugetlb_cma=" to improve the reliability of
runtime allocation.

I wonder which configurations we are talking about:

a) ia64

I, at least, couldn't care less; it's a dead architecture -- not
sure how much people care about "more reliable runtime
allocation of gigantic (here: huge) pages". Also, I am not sure which
exact configurations are affected.

b) ppc64

We have variable hpage size only with CONFIG_PPC_BOOK3S_64. We 
initialize the hugepage either to 1M, 2M or 16M. 16M seems to be the 
primary choice.

ppc64 has CONFIG_FORCE_MAX_ZONEORDER

default "9" if PPC64 && PPC_64K_PAGES
-> 16M effective buddy maximum size
default "13" if PPC64 && !PPC_64K_PAGES
-> 16M effective buddy maximum size

So I fail to see in which scenario we could even end up with
pageblock_order > MAX_ORDER - 1. I did not check ppc32.
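
The "16M effective buddy maximum size" above is just 2^(MAX_ORDER - 1) base
pages; a quick sketch of that arithmetic for the two quoted defaults, assuming
64K and 4K base pages respectively:

#include <stdio.h>

struct cfg {
	const char *name;
	int max_order;		/* FORCE_MAX_ZONEORDER */
	int page_shift;		/* assumed base page size */
};

int main(void)
{
	struct cfg cfgs[] = {
		{ "PPC64 && PPC_64K_PAGES",  9, 16 },	/* 64K pages */
		{ "PPC64 && !PPC_64K_PAGES", 13, 12 },	/* 4K pages  */
	};
	unsigned int i;

	for (i = 0; i < sizeof(cfgs) / sizeof(cfgs[0]); i++) {
		/* Largest buddy block: 2^(MAX_ORDER - 1) base pages. */
		unsigned long bytes =
			1UL << (cfgs[i].max_order - 1 + cfgs[i].page_shift);

		printf("%-26s -> buddy max %lu MiB\n",
		       cfgs[i].name, bytes >> 20);
	}
	return 0;
}

Both rows print 16 MiB, matching the hugepage sizes listed under b) above.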

Patch

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index cfc72873961d..19283bff4bec 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6875,10 +6875,17 @@  void __init set_pageblock_order(void)
 	if (pageblock_order)
 		return;
 
-	if (HPAGE_SHIFT > PAGE_SHIFT)
+	if (HPAGE_SHIFT > PAGE_SHIFT) {
+		/*
+		 * pageblock_order must always be less than
+		 * MAX_ORDER. So does HUGETLB_PAGE_ORDER if
+		 * that is being assigned here.
+		 */
+		BUILD_BUG_ON(HUGETLB_PAGE_ORDER >= MAX_ORDER);
 		order = HUGETLB_PAGE_ORDER;
-	else
+	} else {
 		order = MAX_ORDER - 1;
+	}
 
 	/*
 	 * Assume the largest contiguous order of interest is a huge page.