
[v8,1/4] mm: hugetlb_vmemmap: introduce CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP

Message ID 20220413144748.84106-2-songmuchun@bytedance.com (mailing list archive)
State New
Series add hugetlb_optimize_vmemmap sysctl

Commit Message

Muchun Song April 13, 2022, 2:47 p.m. UTC
If the size of "struct page" is not a power of two and the feature that
minimizes the overhead of the struct pages associated with each HugeTLB
page is enabled, then the vmemmap pages of HugeTLB will be corrupted
after remapping (a panic is theoretically possible).  This can only
happen with !CONFIG_MEMCG && !CONFIG_SLUB on x86_64, which is not a
conventional configuration nowadays, so it is not a real-world issue,
just the result of a code review.  Still, we have to prevent anyone from
building that combination of options.  To avoid scattering checks like
"is_power_of_2(sizeof(struct page))" throughout mm/hugetlb_vmemmap.c,
introduce a new macro, CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP, which
means that the size of struct page is a power of two and
CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP is enabled, and make the code of
this feature depend on this new macro.  That way nobody can end up with
an unexpected configuration.  A new autoconf_ext.h is introduced as
well; it serves as an extension of autoconf.h, since special
configurations such as CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP depend
on autoconf.h (generated from Kconfig) and therefore cannot be expressed
in Kconfig itself.  After this change, it will be easy for anyone who
wants to do something similar (add a new CONFIG) in the future.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Suggested-by: Luis Chamberlain <mcgrof@kernel.org>
---
 Kbuild                     | 19 +++++++++++++++++++
 arch/x86/mm/init_64.c      |  2 +-
 include/linux/hugetlb.h    |  2 +-
 include/linux/kconfig.h    |  4 ++++
 include/linux/mm.h         |  2 +-
 include/linux/page-flags.h |  2 +-
 kernel/autoconf_ext.c      | 26 ++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.c       |  8 ++------
 mm/hugetlb_vmemmap.h       |  4 ++--
 mm/sparse-vmemmap.c        |  4 ++--
 scripts/mod/Makefile       |  2 ++
 11 files changed, 61 insertions(+), 14 deletions(-)
 create mode 100644 kernel/autoconf_ext.c
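
For orientation, here is a rough sketch of what the generated
include/generated/autoconf_ext.h could look like when
CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP=y and sizeof(struct page) is a power
of two.  The header-guard name comes from the Kbuild rule in the patch,
and the define and its comment come from the DEFINE() in
kernel/autoconf_ext.c; the banner is only assumed to match what the
filechk offsets helper normally emits, so treat this as an illustration
rather than the literal generated file:

#ifndef __LINUX_AUTOCONF_EXT_H__
#define __LINUX_AUTOCONF_EXT_H__
/*
 * DO NOT MODIFY.
 *
 * This file was generated by Kbuild
 */

#define CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP 1 /* IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP) && is_power_of_2(sizeof(struct page)) */

#endif

Code elsewhere in the patch then only needs to test
#ifdef CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP, as the hunks below show.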

Comments

Andrew Morton April 13, 2022, 7:08 p.m. UTC | #1
On Wed, 13 Apr 2022 22:47:45 +0800 Muchun Song <songmuchun@bytedance.com> wrote:

> If the size of "struct page" is not the power of two but with the feature
> of minimizing overhead of struct page associated with each HugeTLB is
> enabled, then the vmemmap pages of HugeTLB will be corrupted after
> remapping (panic is about to happen in theory).  But this only exists when
> !CONFIG_MEMCG && !CONFIG_SLUB on x86_64.  However, it is not a conventional
> configuration nowadays.  So it is not a real word issue, just the result
> of a code review.

The patch does add a whole bunch of tricky junk to address something
which won't happen.  How about we simply disable
CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP if (!CONFIG_MEMCG &&
!CONFIG_SLUB)?
Muchun Song April 14, 2022, 3:10 a.m. UTC | #2
On Wed, Apr 13, 2022 at 12:08:04PM -0700, Andrew Morton wrote:
> On Wed, 13 Apr 2022 22:47:45 +0800 Muchun Song <songmuchun@bytedance.com> wrote:
> 
> > If the size of "struct page" is not the power of two but with the feature
> > of minimizing overhead of struct page associated with each HugeTLB is
> > enabled, then the vmemmap pages of HugeTLB will be corrupted after
> > remapping (panic is about to happen in theory).  But this only exists when
> > !CONFIG_MEMCG && !CONFIG_SLUB on x86_64.  However, it is not a conventional
> > configuration nowadays.  So it is not a real word issue, just the result
> > of a code review.
> 
> The patch does add a whole bunch of tricky junk to address something
> which won't happen.  How about we simply disable
> CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP if (!CONFIG_MEMCG &&
> !CONFIG_SLUB)?
>
 
I'm afraid that is not enough.  The size of 'struct page' also depends
on LAST_CPUPID_NOT_IN_PAGE_FLAGS, which can be defined when
CONFIG_NODES_SHIFT, CONFIG_KASAN_SW_TAGS or CONFIG_NR_CPUS is configured
with a large value.  In that case the size would be more than 64 bytes.

The approach in [1] seems simpler and more feasible, and it would also
prevent users from creating unexpected configurations; however, Masahiro
objected to it.  Shall we take another look at that approach?

Thanks.
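
To make that dependency concrete, here is a heavily trimmed illustration
of struct page.  The field names and #ifdefs are taken from
include/linux/mm_types.h, but the real five-word union in the middle is
collapsed into a placeholder array and the remaining members are elided,
so this is a sketch rather than the actual definition:

struct page {
        unsigned long flags;            /* may or may not have room for last_cpupid bits */
        unsigned long _union_words[5];  /* placeholder for the real 5-word union */
        atomic_t _mapcount;
        atomic_t _refcount;
#ifdef CONFIG_MEMCG
        unsigned long memcg_data;       /* only present with CONFIG_MEMCG */
#endif
#ifdef LAST_CPUPID_NOT_IN_PAGE_FLAGS
        int _last_cpupid;               /* spills out of flags when NODES_SHIFT,
                                         * KASAN_SW_TAGS or NR_CPUS leave no room,
                                         * growing the structure */
#endif
} _struct_page_alignment;               /* extra alignment only with
                                         * CONFIG_HAVE_ALIGNED_STRUCT_PAGE,
                                         * e.g. selected by SLUB on x86 */

Whether sizeof(struct page) ends up a power of two therefore depends on
the combination of these options, not on CONFIG_MEMCG and CONFIG_SLUB
alone.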
Muchun Song April 19, 2022, 6:23 a.m. UTC | #3
On Thu, Apr 14, 2022 at 11:10:01AM +0800, Muchun Song wrote:
> On Wed, Apr 13, 2022 at 12:08:04PM -0700, Andrew Morton wrote:
> > On Wed, 13 Apr 2022 22:47:45 +0800 Muchun Song <songmuchun@bytedance.com> wrote:
> > 
> > > If the size of "struct page" is not the power of two but with the feature
> > > of minimizing overhead of struct page associated with each HugeTLB is
> > > enabled, then the vmemmap pages of HugeTLB will be corrupted after
> > > remapping (panic is about to happen in theory).  But this only exists when
> > > !CONFIG_MEMCG && !CONFIG_SLUB on x86_64.  However, it is not a conventional
> > > configuration nowadays.  So it is not a real word issue, just the result
> > > of a code review.
> > 
> > The patch does add a whole bunch of tricky junk to address something
> > which won't happen.  How about we simply disable
> > CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP if (!CONFIG_MEMCG &&
> > !CONFIG_SLUB)?
> >
>  
> I'm afraid not. The size of 'struct page' also depends on
> LAST_CPUPID_NOT_IN_PAGE_FLAGS which could be defined
> when CONFIG_NODES_SHIFT or CONFIG_KASAN_SW_TAGS
> or CONFIG_NR_CPUS is configured with a large value.  Then
> the size would be more than 64 bytes.
> 
> Seems like the approach [1] is more simple and feasible,

Sorry, forgot to post the Link.

[1] https://lore.kernel.org/all/20220323125523.79254-2-songmuchun@bytedance.com/

> which also could prevent the users from doing unexpected
> configurations, however, it is objected by Masahiro.
> Shall we look back at the approach again?
>

Hi all,

Friendly ping.

I have implemented 3 approaches to address this issue.

  1) v8 adds a lot of tricky code.
  2) v5 adds feedback from Kbuild into Kconfig, which, as Masahiro
     said, is terrible.
  3) v1 [2] adds an is_power_of_2() check in hugetlb_vmemmap.c.

Having iterated and explored through 8 versions, v1 seems to be the
easiest way to address this.  I think reusing v1 may be the best choice
now.  What do you think?

[2] https://lore.kernel.org/all/20220228071022.26143-2-songmuchun@bytedance.com/

Thanks.
Masahiro Yamada April 20, 2022, 5:11 p.m. UTC | #4
On Wed, Apr 13, 2022 at 11:48 PM Muchun Song <songmuchun@bytedance.com> wrote:
>
> If the size of "struct page" is not the power of two but with the feature
> of minimizing overhead of struct page associated with each HugeTLB is
> enabled, then the vmemmap pages of HugeTLB will be corrupted after
> remapping (panic is about to happen in theory).  But this only exists when
> !CONFIG_MEMCG && !CONFIG_SLUB on x86_64.  However, it is not a conventional
> configuration nowadays.  So it is not a real word issue, just the result
> of a code review.  But we have to prevent anyone from configuring that
> combined configurations.  In order to avoid many checks like "is_power_of_2
> (sizeof(struct page))" through mm/hugetlb_vmemmap.c.  Introduce a new macro
> CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP to represent the size of struct
> page is power of two and CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP is
> configured.  Then make the codes of this feature depends on this new macro.
> Then we could prevent anyone do any unexpected configurations.  A new
> autoconf_ext.h is introduced as well, which serves as an extension for
> autoconf.h since those special configurations (e.g.
> CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP here) are rely on the autoconf.h
> (generated from Kconfig), so we cannot embed those configurations into
> Kconfig.  After this change, it would be easy if someone want to do the
> similar thing (add a new CONFIG) in the future.
>
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> Suggested-by: Luis Chamberlain <mcgrof@kernel.org>
> ---
>  Kbuild                     | 19 +++++++++++++++++++
>  arch/x86/mm/init_64.c      |  2 +-
>  include/linux/hugetlb.h    |  2 +-
>  include/linux/kconfig.h    |  4 ++++
>  include/linux/mm.h         |  2 +-
>  include/linux/page-flags.h |  2 +-
>  kernel/autoconf_ext.c      | 26 ++++++++++++++++++++++++++
>  mm/hugetlb_vmemmap.c       |  8 ++------
>  mm/hugetlb_vmemmap.h       |  4 ++--
>  mm/sparse-vmemmap.c        |  4 ++--
>  scripts/mod/Makefile       |  2 ++
>  11 files changed, 61 insertions(+), 14 deletions(-)
>  create mode 100644 kernel/autoconf_ext.c
>
> diff --git a/Kbuild b/Kbuild
> index fa441b98c9f6..83c0d5a418d1 100644
> --- a/Kbuild
> +++ b/Kbuild
> @@ -2,6 +2,12 @@
>  #
>  # Kbuild for top-level directory of the kernel
>
> +# autoconf_ext.h is generated last since it depends on other generated headers,
> +# however those other generated headers may include autoconf_ext.h. Use the
> +# following macro to avoid circular dependency.
> +
> +KBUILD_CFLAGS_KERNEL += -D__EXCLUDE_AUTOCONF_EXT_H
> +
>  #####
>  # Generate bounds.h
>
> @@ -37,6 +43,19 @@ $(offsets-file): arch/$(SRCARCH)/kernel/asm-offsets.s FORCE
>         $(call filechk,offsets,__ASM_OFFSETS_H__)
>
>  #####
> +# Generate autoconf_ext.h.
> +
> +autoconf_ext-file := include/generated/autoconf_ext.h
> +
> +always-y += $(autoconf_ext-file)
> +targets += kernel/autoconf_ext.s
> +
> +kernel/autoconf_ext.s: $(bounds-file) $(timeconst-file) $(offsets-file)
> +
> +$(autoconf_ext-file): kernel/autoconf_ext.s FORCE
> +       $(call filechk,offsets,__LINUX_AUTOCONF_EXT_H__)
> +
> +#####
>  # Check for missing system calls
>
>  always-y += missing-syscalls
> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> index 4b9e0012bbbf..9b8dfa6e4da8 100644
> --- a/arch/x86/mm/init_64.c
> +++ b/arch/x86/mm/init_64.c
> @@ -1268,7 +1268,7 @@ static struct kcore_list kcore_vsyscall;
>
>  static void __init register_page_bootmem_info(void)
>  {
> -#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP)
> +#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP)
>         int i;
>
>         for_each_online_node(i)
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index ac2ece9e9c79..d42de8abd2b6 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -623,7 +623,7 @@ struct hstate {
>         unsigned int nr_huge_pages_node[MAX_NUMNODES];
>         unsigned int free_huge_pages_node[MAX_NUMNODES];
>         unsigned int surplus_huge_pages_node[MAX_NUMNODES];
> -#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
> +#ifdef CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP
>         unsigned int optimize_vmemmap_pages;
>  #endif
>  #ifdef CONFIG_CGROUP_HUGETLB
> diff --git a/include/linux/kconfig.h b/include/linux/kconfig.h
> index 20d1079e92b4..00796794f177 100644
> --- a/include/linux/kconfig.h
> +++ b/include/linux/kconfig.h
> @@ -4,6 +4,10 @@
>
>  #include <generated/autoconf.h>
>
> +#if defined(__KERNEL__) && !defined(__EXCLUDE_AUTOCONF_EXT_H)
> +#include <generated/autoconf_ext.h>
> +#endif
> +


Please do not do this either.

When autoconf_ext.h is updated, the kernel tree
would be rebuilt entirely.
Mike Kravetz April 20, 2022, 11:30 p.m. UTC | #5
On 4/20/22 10:11, Masahiro Yamada wrote:
> On Wed, Apr 13, 2022 at 11:48 PM Muchun Song <songmuchun@bytedance.com> wrote:
>>
>> If the size of "struct page" is not the power of two but with the feature
>> of minimizing overhead of struct page associated with each HugeTLB is
>> enabled, then the vmemmap pages of HugeTLB will be corrupted after
>> remapping (panic is about to happen in theory).  But this only exists when
>> !CONFIG_MEMCG && !CONFIG_SLUB on x86_64.  However, it is not a conventional
>> configuration nowadays.  So it is not a real word issue, just the result
>> of a code review.  But we have to prevent anyone from configuring that
>> combined configurations.  In order to avoid many checks like "is_power_of_2
>> (sizeof(struct page))" through mm/hugetlb_vmemmap.c.  Introduce a new macro

Sorry for jumping in so late.  I am far from an expert in Kconfig, so I did
not pay much attention to all those discussions.

Why not just add one (or a few) simple runtime checks for struct page not
being a power of two before enabling the hugetlb vmemmap optimization in the
code?  Sure, it would be ideal never to build/include the vmemmap
optimization code if this could be detected at config time.  However, this
seems to be a very rare combination, and the checks for it at config time
are very complex.

Would we really need many checks throughout the code, as you mention above?
Or do we only need a check or two before enabling
hugetlb_optimize_vmemmap_key?
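
For reference, the kind of runtime check being discussed is the one this
v8 removes from hugetlb_vmemmap_early_param() in the mm/hugetlb_vmemmap.c
hunk below (and that the v1 approach mentioned earlier relies on).  A
minimal sketch of that single guard, with the rest of the handler elided
as a comment:

static int __init hugetlb_vmemmap_early_param(char *buf)
{
        /* We cannot optimize if a "struct page" crosses page boundaries. */
        if (!is_power_of_2(sizeof(struct page))) {
                pr_warn("cannot free vmemmap pages because \"struct page\" crosses page boundaries\n");
                return 0;
        }

        if (!buf)
                return -EINVAL;

        /* ... parse "on"/"off" and toggle hugetlb_optimize_vmemmap_key ... */
        return 0;
}

If flipping hugetlb_optimize_vmemmap_key is the only way the optimization
takes effect, a single check at this point would cover the whole feature.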
Muchun Song April 21, 2022, 3:18 a.m. UTC | #6
On Wed, Apr 20, 2022 at 04:30:02PM -0700, Mike Kravetz wrote:
> On 4/20/22 10:11, Masahiro Yamada wrote:
> > On Wed, Apr 13, 2022 at 11:48 PM Muchun Song <songmuchun@bytedance.com> wrote:
> >>
> >> If the size of "struct page" is not the power of two but with the feature
> >> of minimizing overhead of struct page associated with each HugeTLB is
> >> enabled, then the vmemmap pages of HugeTLB will be corrupted after
> >> remapping (panic is about to happen in theory).  But this only exists when
> >> !CONFIG_MEMCG && !CONFIG_SLUB on x86_64.  However, it is not a conventional
> >> configuration nowadays.  So it is not a real word issue, just the result
> >> of a code review.  But we have to prevent anyone from configuring that
> >> combined configurations.  In order to avoid many checks like "is_power_of_2
> >> (sizeof(struct page))" through mm/hugetlb_vmemmap.c.  Introduce a new macro
> 
> Sorry for jumping in so late.  I am far from expert in Kconfig so did not pay
> much attention to all those discussions.
> 
> Why not just add one (or a few) simple runtime checks for struct page not a
> power of two before enabling hugetlb vmemmap optimization in the code?  Sure,
> it would be ideal to never build/include the vmemmap optimization code if
> this can be detected at config time.  However, it seems this is a very rare
> combination and the checks for it at config time are very complex.

Right.  Having iterated and explored through 8 versions, I realized that
checking it at config time is very complex.
 
> Would we really need many checks throughout the code as you mention above?
> Or, do we only need to check or two before enabling
> hugetlb_optimize_vmemmap_key?

Yep, there is now only one place that needs to check that size.
I think I should go back to v1; it is simpler.

Thanks Mike.

Patch

diff --git a/Kbuild b/Kbuild
index fa441b98c9f6..83c0d5a418d1 100644
--- a/Kbuild
+++ b/Kbuild
@@ -2,6 +2,12 @@ 
 #
 # Kbuild for top-level directory of the kernel
 
+# autoconf_ext.h is generated last since it depends on other generated headers,
+# however those other generated headers may include autoconf_ext.h. Use the
+# following macro to avoid circular dependency.
+
+KBUILD_CFLAGS_KERNEL += -D__EXCLUDE_AUTOCONF_EXT_H
+
 #####
 # Generate bounds.h
 
@@ -37,6 +43,19 @@  $(offsets-file): arch/$(SRCARCH)/kernel/asm-offsets.s FORCE
 	$(call filechk,offsets,__ASM_OFFSETS_H__)
 
 #####
+# Generate autoconf_ext.h.
+
+autoconf_ext-file := include/generated/autoconf_ext.h
+
+always-y += $(autoconf_ext-file)
+targets += kernel/autoconf_ext.s
+
+kernel/autoconf_ext.s: $(bounds-file) $(timeconst-file) $(offsets-file)
+
+$(autoconf_ext-file): kernel/autoconf_ext.s FORCE
+	$(call filechk,offsets,__LINUX_AUTOCONF_EXT_H__)
+
+#####
 # Check for missing system calls
 
 always-y += missing-syscalls
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 4b9e0012bbbf..9b8dfa6e4da8 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1268,7 +1268,7 @@  static struct kcore_list kcore_vsyscall;
 
 static void __init register_page_bootmem_info(void)
 {
-#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP)
+#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP)
 	int i;
 
 	for_each_online_node(i)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index ac2ece9e9c79..d42de8abd2b6 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -623,7 +623,7 @@  struct hstate {
 	unsigned int nr_huge_pages_node[MAX_NUMNODES];
 	unsigned int free_huge_pages_node[MAX_NUMNODES];
 	unsigned int surplus_huge_pages_node[MAX_NUMNODES];
-#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
+#ifdef CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP
 	unsigned int optimize_vmemmap_pages;
 #endif
 #ifdef CONFIG_CGROUP_HUGETLB
diff --git a/include/linux/kconfig.h b/include/linux/kconfig.h
index 20d1079e92b4..00796794f177 100644
--- a/include/linux/kconfig.h
+++ b/include/linux/kconfig.h
@@ -4,6 +4,10 @@ 
 
 #include <generated/autoconf.h>
 
+#if defined(__KERNEL__) && !defined(__EXCLUDE_AUTOCONF_EXT_H)
+#include <generated/autoconf_ext.h>
+#endif
+
 #ifdef CONFIG_CPU_BIG_ENDIAN
 #define __BIG_ENDIAN 4321
 #else
diff --git a/include/linux/mm.h b/include/linux/mm.h
index e0ad13486035..4c36f77a5745 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3186,7 +3186,7 @@  static inline void print_vma_addr(char *prefix, unsigned long rip)
 }
 #endif
 
-#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
+#ifdef CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP
 int vmemmap_remap_free(unsigned long start, unsigned long end,
 		       unsigned long reuse);
 int vmemmap_remap_alloc(unsigned long start, unsigned long end,
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index b70124b9c7c1..e409b10cd677 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -199,7 +199,7 @@  enum pageflags {
 
 #ifndef __GENERATING_BOUNDS_H
 
-#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
+#ifdef CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP
 DECLARE_STATIC_KEY_MAYBE(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON,
 			 hugetlb_optimize_vmemmap_key);
 
diff --git a/kernel/autoconf_ext.c b/kernel/autoconf_ext.c
new file mode 100644
index 000000000000..8475735c6fc9
--- /dev/null
+++ b/kernel/autoconf_ext.c
@@ -0,0 +1,26 @@ 
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Generate definitions needed by the preprocessor.
+ * This code generates raw asm output which is post-processed
+ * to extract and format the required data.
+ */
+#include <linux/mm_types.h>
+#include <linux/kbuild.h>
+#include <linux/log2.h>
+
+int main(void)
+{
+	if (IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP) &&
+	    is_power_of_2(sizeof(struct page))) {
+		/*
+		 * The 2nd parameter of DEFINE() will go into the comments. Do
+		 * not pass 1 directly to it to make the generated macro more
+		 * clear for the readers.
+		 */
+		DEFINE(CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP,
+		       IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP) &&
+		       is_power_of_2(sizeof(struct page)));
+	}
+
+	return 0;
+}
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 2655434a946b..be73782cc1cf 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -178,6 +178,7 @@ 
 
 #include "hugetlb_vmemmap.h"
 
+#ifdef CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP
 /*
  * There are a lot of struct page structures associated with each HugeTLB page.
  * For tail pages, the value of compound_head is the same. So we can reuse first
@@ -194,12 +195,6 @@  EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
 
 static int __init hugetlb_vmemmap_early_param(char *buf)
 {
-	/* We cannot optimize if a "struct page" crosses page boundaries. */
-	if (!is_power_of_2(sizeof(struct page))) {
-		pr_warn("cannot free vmemmap pages because \"struct page\" crosses page boundaries\n");
-		return 0;
-	}
-
 	if (!buf)
 		return -EINVAL;
 
@@ -300,3 +295,4 @@  void __init hugetlb_vmemmap_init(struct hstate *h)
 	pr_info("can optimize %d vmemmap pages for %s\n",
 		h->optimize_vmemmap_pages, h->name);
 }
+#endif /* CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP */
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 109b0a53b6fe..3afae3ff37fa 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -10,7 +10,7 @@ 
 #define _LINUX_HUGETLB_VMEMMAP_H
 #include <linux/hugetlb.h>
 
-#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
+#ifdef CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP
 int hugetlb_vmemmap_alloc(struct hstate *h, struct page *head);
 void hugetlb_vmemmap_free(struct hstate *h, struct page *head);
 void hugetlb_vmemmap_init(struct hstate *h);
@@ -41,5 +41,5 @@  static inline unsigned int hugetlb_optimize_vmemmap_pages(struct hstate *h)
 {
 	return 0;
 }
-#endif /* CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP */
+#endif /* CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP */
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 52f36527bab3..6c7f1a9ce2dd 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -34,7 +34,7 @@ 
 #include <asm/pgalloc.h>
 #include <asm/tlbflush.h>
 
-#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
+#ifdef CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP
 /**
  * struct vmemmap_remap_walk - walk vmemmap page table
  *
@@ -420,7 +420,7 @@  int vmemmap_remap_alloc(unsigned long start, unsigned long end,
 
 	return 0;
 }
-#endif /* CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP */
+#endif /* CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP */
 
 /*
  * Allocate a block of memory to be used to back the virtual memory map
diff --git a/scripts/mod/Makefile b/scripts/mod/Makefile
index c9e38ad937fd..f82ab128c086 100644
--- a/scripts/mod/Makefile
+++ b/scripts/mod/Makefile
@@ -1,6 +1,8 @@ 
 # SPDX-License-Identifier: GPL-2.0
 OBJECT_FILES_NON_STANDARD := y
 CFLAGS_REMOVE_empty.o += $(CC_FLAGS_LTO)
+# See comments in Kbuild
+KBUILD_CFLAGS_KERNEL += -D__EXCLUDE_AUTOCONF_EXT_H
 
 hostprogs-always-y	+= modpost mk_elfconfig
 always-y		+= empty.o