
[v12,4/7] mm: hotplug: introduce SECTION_CANNOT_OPTIMIZE_VMEMMAP

Message ID 20220516102211.41557-5-songmuchun@bytedance.com (mailing list archive)
State New
Series add hugetlb_optimize_vmemmap sysctl

Commit Message

Muchun Song May 16, 2022, 10:22 a.m. UTC
For now, the hugetlb_free_vmemmap feature is not compatible with the
memory_hotplug.memmap_on_memory feature, and hugetlb_free_vmemmap takes
precedence over memory_hotplug.memmap_on_memory. However, some users
want memory_hotplug.memmap_on_memory to take precedence over
hugetlb_free_vmemmap, since memmap_on_memory makes memory hotplug more
likely to succeed in close-to-OOM situations.  So hard-coding
hugetlb_free_vmemmap to take precedence is neither wise nor elegant.
The proper approach is to have hugetlb_vmemmap.c check whether the
sections to which the HugeTLB pages belong can be optimized.  If a
section's vmemmap pages are allocated from the added memory block
itself, hugetlb_free_vmemmap should refuse to optimize the vmemmap;
otherwise, it should do the optimization.  Then both kernel parameters
become compatible.  So this patch introduces
SECTION_CANNOT_OPTIMIZE_VMEMMAP to indicate whether a section can be
optimized.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 Documentation/admin-guide/kernel-parameters.txt | 22 +++++++++++-----------
 include/linux/mmzone.h                          | 17 +++++++++++++++++
 mm/hugetlb_vmemmap.c                            | 16 +++++++++++++++-
 mm/memory_hotplug.c                             |  1 -
 mm/sparse.c                                     |  7 +++++++
 5 files changed, 50 insertions(+), 13 deletions(-)
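
To make the intended gating concrete, below is a minimal user-space C
sketch of the idea. It is not part of the patch: struct section,
add_section() and vmemmap_optimizable() are simplified stand-ins for
the kernel's struct mem_section, sparse_add_section() and the
optimizable_vmemmap_pages() helper introduced here. Hot-added sections
whose vmemmap comes from an altmap are tainted, and the HugeTLB path
refuses to optimize any huge page overlapping a tainted section.

#include <stdbool.h>
#include <stdio.h>

/* Toy model only: 128 MiB sections with 4 KiB pages. */
#define PAGES_PER_SECTION	32768UL
#define NR_SECTIONS		4

#define SECTION_CANNOT_OPTIMIZE_VMEMMAP	(1UL << 0)

struct section {
	unsigned long flags;
	bool early;
};

static struct section sections[NR_SECTIONS];

static struct section *pfn_to_section(unsigned long pfn)
{
	return &sections[pfn / PAGES_PER_SECTION];
}

/* Models the hunk added to sparse_add_section(): only non-early,
 * altmap-backed sections are marked non-optimizable. */
static void add_section(unsigned long nr, bool early, bool altmap)
{
	sections[nr].early = early;
	if (!early && altmap)
		sections[nr].flags |= SECTION_CANNOT_OPTIMIZE_VMEMMAP;
}

/* Models optimizable_vmemmap_pages(): one tainted section anywhere
 * in the huge page's pfn range vetoes the optimization entirely. */
static bool vmemmap_optimizable(unsigned long pfn, unsigned long nr_pages)
{
	unsigned long end = pfn + nr_pages;

	for (; pfn < end; pfn += PAGES_PER_SECTION)
		if (pfn_to_section(pfn)->flags & SECTION_CANNOT_OPTIMIZE_VMEMMAP)
			return false;
	return true;
}

int main(void)
{
	add_section(0, true, false);	/* boot (early) memory */
	add_section(1, false, true);	/* hot-added with memmap_on_memory */

	printf("early section optimizable:  %d\n",
	       vmemmap_optimizable(0, 512));
	printf("altmap section optimizable: %d\n",
	       vmemmap_optimizable(PAGES_PER_SECTION, 512));
	return 0;
}

Note the veto is per huge page rather than per section: a single
altmap-backed section anywhere in the range disables the optimization
for the whole page, which is what the loop in
optimizable_vmemmap_pages() in the patch below implements.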

Comments

Oscar Salvador May 16, 2022, 10:38 a.m. UTC | #1
On Mon, May 16, 2022 at 06:22:08PM +0800, Muchun Song wrote:
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -913,6 +913,13 @@ int __meminit sparse_add_section(int nid, unsigned long start_pfn,
>  	ms = __nr_to_section(section_nr);
>  	set_section_nid(section_nr, nid);
>  	__section_mark_present(ms, section_nr);
> +	/*
> +	 * Mark the whole section as non-optimizable once there is a subsection
> +	 * whose vmemmap pages are allocated from an alternative allocator. An
> +	 * early section is always optimizable.
> +	 */
> +	if (!early_section(ms) && altmap)
> +		section_mark_cannot_optimize_vmemmap(ms);

Because no one expects those sections to be removed?
IIRC, early_section + altmap only happens in the sub-section pmem
scenario? I guess my question is: can we really have early_sections coming
from an alternative allocator?

I think this should be spelled out more.
Muchun Song May 16, 2022, 12:03 p.m. UTC | #2
On Mon, May 16, 2022 at 12:38:46PM +0200, Oscar Salvador wrote:
> On Mon, May 16, 2022 at 06:22:08PM +0800, Muchun Song wrote:
> > --- a/mm/sparse.c
> > +++ b/mm/sparse.c
> > @@ -913,6 +913,13 @@ int __meminit sparse_add_section(int nid, unsigned long start_pfn,
> >  	ms = __nr_to_section(section_nr);
> >  	set_section_nid(section_nr, nid);
> >  	__section_mark_present(ms, section_nr);
> > +	/*
> > +	 * Mark the whole section as non-optimizable once there is a subsection
> > +	 * whose vmemmap pages are allocated from an alternative allocator. An
> > +	 * early section is always optimizable.
> > +	 */
> > +	if (!early_section(ms) && altmap)
> > +		section_mark_cannot_optimize_vmemmap(ms);
> 
> Because no one expects those sections to be removed?
> IIRC, early_section + altmap only happens in the sub-section pmem
> scenario?

Right. The commit ba72b4c8cf60 ("mm/sparsemem: support sub-section hotplug")
has more information.

> I guess my question is: can we really have early_sections coming
> from an alternative allocator?
>

We can't. The early init code does not currently consider
partially populated sections.

> I think this should be spelled out more.

I think you mean I should add some comments here to describe
the case of early_section + altmap, right?

Thanks.
Oscar Salvador May 17, 2022, 7:52 a.m. UTC | #3
On Mon, May 16, 2022 at 08:03:49PM +0800, Muchun Song wrote:
> On Mon, May 16, 2022 at 12:38:46PM +0200, Oscar Salvador wrote:
> > On Mon, May 16, 2022 at 06:22:08PM +0800, Muchun Song wrote:
> > > --- a/mm/sparse.c
> > > +++ b/mm/sparse.c
> > > @@ -913,6 +913,13 @@ int __meminit sparse_add_section(int nid, unsigned long start_pfn,
> > >  	ms = __nr_to_section(section_nr);
> > >  	set_section_nid(section_nr, nid);
> > >  	__section_mark_present(ms, section_nr);
> > > +	/*
> > > +	 * Mark the whole section as non-optimizable once there is a subsection
> > > +	 * whose vmemmap pages are allocated from an alternative allocator. An
> > > +	 * early section is always optimizable.
> > > +	 */
> > > +	if (!early_section(ms) && altmap)
> > > +		section_mark_cannot_optimize_vmemmap(ms);
> > 
> > Because no one expects those sections to be removed?
> > IIRC, early_section + altmap only happens in the sub-section pmem
> > scenario?
> 
> Right. The commit ba72b4c8cf60 ("mm/sparsemem: support sub-section hotplug")
> has more information.
> 
> > I guess my question is: can we really have early_sections coming
> > from an alternative allocator?
> >
> 
> > We can't. The early init code does not currently consider
> > partially populated sections.

Then, IIUC, we can forget about the early_section() check?
Muchun Song May 17, 2022, 8:10 a.m. UTC | #4
On Tue, May 17, 2022 at 09:52:36AM +0200, Oscar Salvador wrote:
> On Mon, May 16, 2022 at 08:03:49PM +0800, Muchun Song wrote:
> > On Mon, May 16, 2022 at 12:38:46PM +0200, Oscar Salvador wrote:
> > > On Mon, May 16, 2022 at 06:22:08PM +0800, Muchun Song wrote:
> > > > --- a/mm/sparse.c
> > > > +++ b/mm/sparse.c
> > > > @@ -913,6 +913,13 @@ int __meminit sparse_add_section(int nid, unsigned long start_pfn,
> > > >  	ms = __nr_to_section(section_nr);
> > > >  	set_section_nid(section_nr, nid);
> > > >  	__section_mark_present(ms, section_nr);
> > > > +	/*
> > > > +	 * Mark the whole section as non-optimizable once there is a subsection
> > > > +	 * whose vmemmap pages are allocated from an alternative allocator. An
> > > > +	 * early section is always optimizable.
> > > > +	 */
> > > > +	if (!early_section(ms) && altmap)
> > > > +		section_mark_cannot_optimize_vmemmap(ms);
> > > 
> > > Because no one expects those sections to be removed?
> > > IIRC, early_section + altmap only happens in the sub-section pmem
> > > scenario?
> > 
> > Right. The commit ba72b4c8cf60 ("mm/sparsemem: support sub-section hotplug")
> > has more information.
> > 
> > > I guess my question is: can we really have early_sections coming
> > > from an alternative allocator?
> > >
> > 
> > We can't. The early init code does not currently consider
> > partially populated sections.
> 
> Then, IIUC, we can forget about the early_section() check?
>

Sorry for the confusion. I meant that early_section() should be checked.
I found a comment in section_activate() that says:

	/*
	 * The early init code does not consider partially populated
	 * initial sections, it simply assumes that memory will never be
	 * referenced.  If we hot-add memory into such a section then we
	 * do not need to populate the memmap and can simply reuse what
	 * is already there.
	 */
	if (nr_pages < PAGES_PER_SECTION && early_section(ms))
		return pfn_to_page(pfn);
 
We can see that we can hot-add a sub-section within an early section.
So I think early_section + altmap can happen in this case, and then
we cannot drop that check. Right?

Thanks.
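
To spell that case out, here is a hedged sketch in the same toy style
as the example above (not kernel code): when a sub-section is hot-added
into an early section, section_activate() reuses the memmap that was
populated at boot, so nothing in that section's vmemmap is allocated
from the altmap and the section must stay optimizable. The
!early_section(ms) guard encodes exactly that.

/* Sketch only; reuses struct section and the flag from the earlier
 * toy model.  Models the decision for a sub-section hot-add. */
static void add_subsection(struct section *ms, bool altmap)
{
	if (ms->early) {
		/*
		 * Early section: the boot-allocated memmap is reused,
		 * so even with an altmap in hand nothing here comes
		 * from the hot-added block.  Keep it optimizable.
		 */
		return;
	}
	if (altmap)
		ms->flags |= SECTION_CANNOT_OPTIMIZE_VMEMMAP;
}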

Patch

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 308da668bbb1..a0a014f2104c 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1711,9 +1711,11 @@ 
 			Built with CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON=y,
 			the default is on.
 
-			This is not compatible with memory_hotplug.memmap_on_memory.
-			If both parameters are enabled, hugetlb_free_vmemmap takes
-			precedence over memory_hotplug.memmap_on_memory.
+			Note that the vmemmap pages may be allocated from the added
+			memory block itself when memory_hotplug.memmap_on_memory is
+			enabled; those vmemmap pages cannot be optimized even if this
+			feature is enabled.  Other vmemmap pages not allocated from
+			the added memory block itself are not affected.
 
 	hung_task_panic=
 			[KNL] Should the hung task detector generate panics.
@@ -3038,10 +3040,12 @@ 
 			[KNL,X86,ARM] Boolean flag to enable this feature.
 			Format: {on | off (default)}
 			When enabled, runtime hotplugged memory will
-			allocate its internal metadata (struct pages)
-			from the hotadded memory which will allow to
-			hotadd a lot of memory without requiring
-			additional memory to do so.
+			allocate its internal metadata (struct pages;
+			those vmemmap pages cannot be optimized even
+			if hugetlb_free_vmemmap is enabled) from the
+			hotadded memory which will allow to hotadd a
+			lot of memory without requiring additional
+			memory to do so.
 			This feature is disabled by default because it
 			has some implication on large (e.g. GB)
 			allocations in some configurations (e.g. small
@@ -3051,10 +3055,6 @@ 
 			Note that even when enabled, there are a few cases where
 			the feature is not effective.
 
-			This is not compatible with hugetlb_free_vmemmap. If
-			both parameters are enabled, hugetlb_free_vmemmap takes
-			precedence over memory_hotplug.memmap_on_memory.
-
 	memtest=	[KNL,X86,ARM,M68K,PPC,RISCV] Enable memtest
 			Format: <integer>
 			default : 0 <disable>
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index af057e20b9d7..7b69acc5c2a9 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1430,6 +1430,7 @@  extern size_t mem_section_usage_size(void);
 	MAPPER(IS_ONLINE)							\
 	MAPPER(IS_EARLY)							\
 	MAPPER(TAINT_ZONE_DEVICE, CONFIG_ZONE_DEVICE)				\
+	MAPPER(CANNOT_OPTIMIZE_VMEMMAP, CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP)	\
 	MAPPER(MAP_LAST_BIT)
 
 #define __SECTION_SHIFT_FLAG_MAPPER_0(x)
@@ -1457,6 +1458,22 @@  static inline struct page *__section_mem_map_addr(struct mem_section *section)
 	return (struct page *)map;
 }
 
+#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
+static inline void section_mark_cannot_optimize_vmemmap(struct mem_section *ms)
+{
+	ms->section_mem_map |= SECTION_CANNOT_OPTIMIZE_VMEMMAP;
+}
+
+static inline int section_cannot_optimize_vmemmap(struct mem_section *ms)
+{
+	return (ms && (ms->section_mem_map & SECTION_CANNOT_OPTIMIZE_VMEMMAP));
+}
+#else
+static inline void section_mark_cannot_optimize_vmemmap(struct mem_section *ms)
+{
+}
+#endif
+
 static inline int present_section(struct mem_section *section)
 {
 	return (section && (section->section_mem_map & SECTION_MARKED_PRESENT));
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index cc4ec752ec16..970c36b8935f 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -75,12 +75,26 @@  int hugetlb_vmemmap_alloc(struct hstate *h, struct page *head)
 	return ret;
 }
 
+static unsigned int optimizable_vmemmap_pages(struct hstate *h,
+					      struct page *head)
+{
+	unsigned long pfn = page_to_pfn(head);
+	unsigned long end = pfn + pages_per_huge_page(h);
+
+	for (; pfn < end; pfn += PAGES_PER_SECTION) {
+		if (section_cannot_optimize_vmemmap(__pfn_to_section(pfn)))
+			return 0;
+	}
+
+	return hugetlb_optimize_vmemmap_pages(h);
+}
+
 void hugetlb_vmemmap_free(struct hstate *h, struct page *head)
 {
 	unsigned long vmemmap_addr = (unsigned long)head;
 	unsigned long vmemmap_end, vmemmap_reuse, vmemmap_pages;
 
-	vmemmap_pages = hugetlb_optimize_vmemmap_pages(h);
+	vmemmap_pages = optimizable_vmemmap_pages(h, head);
 	if (!vmemmap_pages)
 		return;
 
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index aef3f041dec7..1d0225d57166 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1270,7 +1270,6 @@  bool mhp_supports_memmap_on_memory(unsigned long size)
 	 *       populate a single PMD.
 	 */
 	return memmap_on_memory &&
-	       !hugetlb_optimize_vmemmap_enabled() &&
 	       IS_ENABLED(CONFIG_MHP_MEMMAP_ON_MEMORY) &&
 	       size == memory_block_size_bytes() &&
 	       IS_ALIGNED(vmemmap_size, PMD_SIZE) &&
diff --git a/mm/sparse.c b/mm/sparse.c
index d2d76d158b39..8197ef9b7c4c 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -913,6 +913,13 @@  int __meminit sparse_add_section(int nid, unsigned long start_pfn,
 	ms = __nr_to_section(section_nr);
 	set_section_nid(section_nr, nid);
 	__section_mark_present(ms, section_nr);
+	/*
+	 * Mark the whole section as non-optimizable once there is a subsection
+	 * whose vmemmap pages are allocated from an alternative allocator. An
+	 * early section is always optimizable.
+	 */
+	if (!early_section(ms) && altmap)
+		section_mark_cannot_optimize_vmemmap(ms);
 
 	/* Align memmap to section boundary in the subsection case */
 	if (section_nr_to_pfn(section_nr) != start_pfn)