
+ mm-memory_hotplug-export-mhp_supports_memmap_on_memory.patch added to mm-unstable branch

Message ID 20240126021634.E2815C43390@smtp.kernel.org (mailing list archive)
State Handled Elsewhere, archived
Series: + mm-memory_hotplug-export-mhp_supports_memmap_on_memory.patch added to mm-unstable branch

Commit Message

Andrew Morton Jan. 26, 2024, 2:16 a.m. UTC
The patch titled
     Subject: mm/memory_hotplug: export mhp_supports_memmap_on_memory()
has been added to the -mm mm-unstable branch.  Its filename is
     mm-memory_hotplug-export-mhp_supports_memmap_on_memory.patch

This patch will shortly appear at

This patch will later appear in the mm-unstable branch at

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

From: Vishal Verma <vishal.l.verma@intel.com>
Subject: mm/memory_hotplug: export mhp_supports_memmap_on_memory()
Date: Wed, 24 Jan 2024 12:03:49 -0800

In preparation for adding sysfs ABI to toggle memmap_on_memory semantics
for drivers adding memory, export the mhp_supports_memmap_on_memory()
helper. This allows drivers to check if memmap_on_memory support is
available before trying to request it, and display an appropriate
message if it isn't available. As part of this, remove the size argument
to this helper - with recent updates allowing memmap_on_memory for larger
ranges, and the internal splitting of altmaps into respective memory
blocks, the size argument is meaningless.

Link: https://lkml.kernel.org/r/20240124-vv-dax_abi-v7-4-20d16cb8d23d@intel.com
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: David Hildenbrand <david@redhat.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Huang Ying <ying.huang@intel.com>
Suggested-by: David Hildenbrand <david@redhat.com>
Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Li Zhijian <lizhijian@fujitsu.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: <nvdimm@lists.linux.dev>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

 include/linux/memory_hotplug.h |    6 ++++++
 mm/memory_hotplug.c            |   17 ++++++-----------
 2 files changed, 12 insertions(+), 11 deletions(-)


--- a/include/linux/memory_hotplug.h~mm-memory_hotplug-export-mhp_supports_memmap_on_memory
+++ a/include/linux/memory_hotplug.h
@@ -137,6 +137,7 @@  struct mhp_params {
 bool mhp_range_allowed(u64 start, u64 size, bool need_mapping);
 struct range mhp_get_pluggable_range(bool need_mapping);
+bool mhp_supports_memmap_on_memory(void);
 
 /*
  * Zone resizing functions
@@ -277,6 +278,11 @@  static inline bool movable_node_is_enabl
 	return false;
 }
 
+static inline bool mhp_supports_memmap_on_memory(void)
+{
+	return false;
+}
+
 static inline void pgdat_kswapd_lock(pg_data_t *pgdat) {}
 static inline void pgdat_kswapd_unlock(pg_data_t *pgdat) {}
--- a/mm/memory_hotplug.c~mm-memory_hotplug-export-mhp_supports_memmap_on_memory
+++ a/mm/memory_hotplug.c
@@ -1337,7 +1337,7 @@  static inline bool arch_supports_memmap_
-static bool mhp_supports_memmap_on_memory(unsigned long size)
+bool mhp_supports_memmap_on_memory(void)
 {
 	unsigned long vmemmap_size = memory_block_memmap_size();
 	unsigned long memmap_pages = memory_block_memmap_on_memory_pages();
@@ -1346,17 +1346,11 @@  static bool mhp_supports_memmap_on_memor
 	 * Besides having arch support and the feature enabled at runtime, we
 	 * need a few more assumptions to hold true:
-	 * a) We span a single memory block: memory onlining/offlining happens
-	 *    in memory block granularity. We don't want the vmemmap of online
-	 *    memory blocks to reside on offline memory blocks. In the future,
-	 *    we might want to support variable-sized memory blocks to make the
-	 *    feature more versatile.
-	 *
-	 * b) The vmemmap pages span complete PMDs: We don't want vmemmap code
+	 * a) The vmemmap pages span complete PMDs: We don't want vmemmap code
 	 *    to populate memory from the altmap for unrelated parts (i.e.,
 	 *    other memory blocks)
-	 * c) The vmemmap pages (and thereby the pages that will be exposed to
+	 * b) The vmemmap pages (and thereby the pages that will be exposed to
 	 *    the buddy) have to cover full pageblocks: memory onlining/offlining
 	 *    code requires applicable ranges to be page-aligned, for example, to
 	 *    set the migratetypes properly.
@@ -1368,7 +1362,7 @@  static bool mhp_supports_memmap_on_memor
 	 *       altmap as an alternative source of memory, and we do not exactly
 	 *       populate a single PMD.
 	 */
-	if (!mhp_memmap_on_memory() || size != memory_block_size_bytes())
+	if (!mhp_memmap_on_memory())
 		return false;
@@ -1391,6 +1385,7 @@  static bool mhp_supports_memmap_on_memor
 	return arch_supports_memmap_on_memory(vmemmap_size);
 }
+EXPORT_SYMBOL_GPL(mhp_supports_memmap_on_memory);
 
 static void __ref remove_memory_blocks_and_altmaps(u64 start, u64 size)
@@ -1526,7 +1521,7 @@  int __ref add_memory_resource(int nid, s
 	/*
 	 * Self hosted memmap array
 	 */
 	if ((mhp_flags & MHP_MEMMAP_ON_MEMORY) &&
-	    mhp_supports_memmap_on_memory(memory_block_size_bytes())) {
+	    mhp_supports_memmap_on_memory()) {
 		ret = create_altmaps_and_memory_blocks(nid, group, start, size, mhp_flags);
 		if (ret)
 			goto error;