Message ID | 1610975582-12646-2-git-send-email-anshuman.khandual@arm.com (mailing list archive)
---|---
State | New, archived
Series | mm/memory_hotplug: Pre-validate the address range with platform
On 18.01.21 14:12, Anshuman Khandual wrote:
> This introduces memhp_range_allowed() which can be called in various memory
> hotplug paths to prevalidate the address range which is being added, with
> the platform. Then memhp_range_allowed() calls memhp_get_pluggable_range()
> which provides applicable address range depending on whether linear mapping
> is required or not. For ranges that require linear mapping, it calls a new
> arch callback arch_get_mappable_range() which the platform can override. So
> the new callback, in turn provides the platform an opportunity to configure
> acceptable memory hotplug address ranges in case there are constraints.
>
> This mechanism will help prevent platform specific errors deep down during
> hotplug calls. This drops now redundant check_hotplug_memory_addressable()
> check in __add_pages() but instead adds a VM_BUG_ON() check which would

In this patch, you keep the __add_pages() checks. But as discussed, we
could perform it in mm/memremap.c:pagemap_range() instead and convert it
to a VM_BUG_ON().

Apart from that looks good to me.
On 1/19/21 5:51 PM, David Hildenbrand wrote:
> On 18.01.21 14:12, Anshuman Khandual wrote:
>> This introduces memhp_range_allowed() which can be called in various memory
>> hotplug paths to prevalidate the address range which is being added, with
>> the platform. Then memhp_range_allowed() calls memhp_get_pluggable_range()
>> which provides applicable address range depending on whether linear mapping
>> is required or not. For ranges that require linear mapping, it calls a new
>> arch callback arch_get_mappable_range() which the platform can override. So
>> the new callback, in turn provides the platform an opportunity to configure
>> acceptable memory hotplug address ranges in case there are constraints.
>>
>> This mechanism will help prevent platform specific errors deep down during
>> hotplug calls. This drops now redundant check_hotplug_memory_addressable()
>> check in __add_pages() but instead adds a VM_BUG_ON() check which would
>
> In this patch, you keep the __add_pages() checks. But as discussed, we
> could perform it in mm/memremap.c:pagemap_range() instead and convert it
> to a VM_BUG_ON().

Just to be sure, will the following change achieve what you are
suggesting here. pagemap_range() after this change, will again
be the same like the V1 series.

---
 mm/memory_hotplug.c |  3 +--
 mm/memremap.c       | 12 +++++-------
 2 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 46faa914aa25..10d4ec8f349c 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -304,8 +304,7 @@ int __ref __add_pages(int nid, unsigned long pfn, unsigned long nr_pages,
 	if (WARN_ON_ONCE(!params->pgprot.pgprot))
 		return -EINVAL;
 
-	if(!memhp_range_allowed(PFN_PHYS(pfn), nr_pages * PAGE_SIZE, false))
-		return -E2BIG;
+	VM_BUG_ON(!memhp_range_allowed(PFN_PHYS(pfn), nr_pages * PAGE_SIZE, false));
 
 	if (altmap) {
 		/*
diff --git a/mm/memremap.c b/mm/memremap.c
index e15b13736f6a..26c1825756cc 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -185,6 +185,7 @@ static void dev_pagemap_percpu_release(struct percpu_ref *ref)
 static int pagemap_range(struct dev_pagemap *pgmap, struct mhp_params *params,
 		int range_id, int nid)
 {
+	const bool is_private = pgmap->type == MEMORY_DEVICE_PRIVATE;
 	struct range *range = &pgmap->ranges[range_id];
 	struct dev_pagemap *conflict_pgmap;
 	int error, is_ram;
@@ -230,6 +231,9 @@ static int pagemap_range(struct dev_pagemap *pgmap, struct mhp_params *params,
 	if (error)
 		goto err_pfn_remap;
 
+	if (!memhp_range_allowed(range->start, range_len(range), !is_private))
+		goto err_pfn_remap;
+
 	mem_hotplug_begin();
 
 	/*
@@ -243,7 +247,7 @@ static int pagemap_range(struct dev_pagemap *pgmap, struct mhp_params *params,
 	 * the CPU, we do want the linear mapping and thus use
 	 * arch_add_memory().
 	 */
-	if (pgmap->type == MEMORY_DEVICE_PRIVATE) {
+	if (is_private) {
 		error = add_pages(nid, PHYS_PFN(range->start),
 				PHYS_PFN(range_len(range)), params);
 	} else {
@@ -253,12 +257,6 @@ static int pagemap_range(struct dev_pagemap *pgmap, struct mhp_params *params,
 			goto err_kasan;
 		}
 
-		if (!memhp_range_allowed(range->start, range_len(range), true)) {
-			error = -ERANGE;
-			mem_hotplug_done();
-			goto err_add_memory;
-		}
-
 		error = arch_add_memory(nid, range->start, range_len(range),
 					params);
 	}
On 20.01.21 09:33, Anshuman Khandual wrote:
>
>
> On 1/19/21 5:51 PM, David Hildenbrand wrote:
>> On 18.01.21 14:12, Anshuman Khandual wrote:
>>> This introduces memhp_range_allowed() which can be called in various memory
>>> hotplug paths to prevalidate the address range which is being added, with
>>> the platform. Then memhp_range_allowed() calls memhp_get_pluggable_range()
>>> which provides applicable address range depending on whether linear mapping
>>> is required or not. For ranges that require linear mapping, it calls a new
>>> arch callback arch_get_mappable_range() which the platform can override. So
>>> the new callback, in turn provides the platform an opportunity to configure
>>> acceptable memory hotplug address ranges in case there are constraints.
>>>
>>> This mechanism will help prevent platform specific errors deep down during
>>> hotplug calls. This drops now redundant check_hotplug_memory_addressable()
>>> check in __add_pages() but instead adds a VM_BUG_ON() check which would
>>
>> In this patch, you keep the __add_pages() checks. But as discussed, we
>> could perform it in mm/memremap.c:pagemap_range() instead and convert it
>> to a VM_BUG_ON().
>
> Just to be sure, will the following change achieve what you are
> suggesting here. pagemap_range() after this change, will again
> be the same like the V1 series.

Yeah, as we used to have in v1. Maybe other reviewers (@Oscar?) have a
different opinion.

If you decide to leave as-is, please fixup the patch description.

Thanks!
On Wed, Jan 20, 2021 at 11:41:53AM +0100, David Hildenbrand wrote:
> On 20.01.21 09:33, Anshuman Khandual wrote:
> >
> >
> > On 1/19/21 5:51 PM, David Hildenbrand wrote:
> >> On 18.01.21 14:12, Anshuman Khandual wrote:
> >>> This introduces memhp_range_allowed() which can be called in various memory
> >>> hotplug paths to prevalidate the address range which is being added, with
> >>> the platform. Then memhp_range_allowed() calls memhp_get_pluggable_range()
> >>> which provides applicable address range depending on whether linear mapping
> >>> is required or not. For ranges that require linear mapping, it calls a new
> >>> arch callback arch_get_mappable_range() which the platform can override. So
> >>> the new callback, in turn provides the platform an opportunity to configure
> >>> acceptable memory hotplug address ranges in case there are constraints.
> >>>
> >>> This mechanism will help prevent platform specific errors deep down during
> >>> hotplug calls. This drops now redundant check_hotplug_memory_addressable()
> >>> check in __add_pages() but instead adds a VM_BUG_ON() check which would
> >>
> >> In this patch, you keep the __add_pages() checks. But as discussed, we
> >> could perform it in mm/memremap.c:pagemap_range() instead and convert it
> >> to a VM_BUG_ON().
> >
> > Just to be sure, will the following change achieve what you are
> > suggesting here. pagemap_range() after this change, will again
> > be the same like the V1 series.
>
> Yeah, as we used to have in v1. Maybe other reviewers (@Oscar?) have a
> different opinion.

No, I think that placing the check in pagemap_range() out of the if-else
makes much more sense. Actually, unless my memory fails me, that is what
I suggested in v2.

I plan to have a look at the series later this week as I am fairly busy atm.

Thanks
On Wed, Jan 20, 2021 at 02:03:45PM +0530, Anshuman Khandual wrote: > Just to be sure, will the following change achieve what you are > suggesting here. pagemap_range() after this change, will again > be the same like the V1 series. With below diff on top it looks good to me: Reviewed-by: Oscar Salvador <osalvador@suse.de> The only nit I would have is whether the declaration of arch_get_mappable_range should be in include/linux/memory_hotplug.h. As you pointed out, arch_get_mappable_range() might be used by the platform for other purposes, and since you are defining it out of CONFIG_MEMORY_HOTPLUG anyway. Would include/linu/memory.h be a better fit? As I said, nothing to bikeshed about, just my thoughts. > --- > mm/memory_hotplug.c | 3 +-- > mm/memremap.c | 12 +++++------- > 2 files changed, 6 insertions(+), 9 deletions(-) > > diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c > index 46faa914aa25..10d4ec8f349c 100644 > --- a/mm/memory_hotplug.c > +++ b/mm/memory_hotplug.c > @@ -304,8 +304,7 @@ int __ref __add_pages(int nid, unsigned long pfn, unsigned long nr_pages, > if (WARN_ON_ONCE(!params->pgprot.pgprot)) > return -EINVAL; > > - if(!memhp_range_allowed(PFN_PHYS(pfn), nr_pages * PAGE_SIZE, false)) > - return -E2BIG; > + VM_BUG_ON(!memhp_range_allowed(PFN_PHYS(pfn), nr_pages * PAGE_SIZE, false)); > > if (altmap) { > /* > diff --git a/mm/memremap.c b/mm/memremap.c > index e15b13736f6a..26c1825756cc 100644 > --- a/mm/memremap.c > +++ b/mm/memremap.c > @@ -185,6 +185,7 @@ static void dev_pagemap_percpu_release(struct percpu_ref *ref) > static int pagemap_range(struct dev_pagemap *pgmap, struct mhp_params *params, > int range_id, int nid) > { > + const bool is_private = pgmap->type == MEMORY_DEVICE_PRIVATE; > struct range *range = &pgmap->ranges[range_id]; > struct dev_pagemap *conflict_pgmap; > int error, is_ram; > @@ -230,6 +231,9 @@ static int pagemap_range(struct dev_pagemap *pgmap, struct mhp_params *params, > if (error) > goto err_pfn_remap; > > + if (!memhp_range_allowed(range->start, range_len(range), !is_private)) > + goto err_pfn_remap; > + > mem_hotplug_begin(); > > /* > @@ -243,7 +247,7 @@ static int pagemap_range(struct dev_pagemap *pgmap, struct mhp_params *params, > * the CPU, we do want the linear mapping and thus use > * arch_add_memory(). > */ > - if (pgmap->type == MEMORY_DEVICE_PRIVATE) { > + if (is_private) { > error = add_pages(nid, PHYS_PFN(range->start), > PHYS_PFN(range_len(range)), params); > } else { > @@ -253,12 +257,6 @@ static int pagemap_range(struct dev_pagemap *pgmap, struct mhp_params *params, > goto err_kasan; > } > > - if (!memhp_range_allowed(range->start, range_len(range), true)) { > - error = -ERANGE; > - mem_hotplug_done(); > - goto err_add_memory; > - } > - > error = arch_add_memory(nid, range->start, range_len(range), > params); > } > -- > 2.20.1 >
> +/*
> + * Platforms should define arch_get_mappable_range() that provides
> + * maximum possible addressable physical memory range for which the
> + * linear mapping could be created. The platform returned address
> + * range must adhere to these following semantics.
> + *
> + * - range.start <= range.end
> + * - Range includes both end points [range.start..range.end]
> + *
> + * There is also a fallback definition provided here, allowing the
> + * entire possible physical address range in case any platform does
> + * not define arch_get_mappable_range().
> + */
> +struct range __weak arch_get_mappable_range(void)
> +{
> +	struct range memhp_range = {
> +		.start = 0UL,
> +		.end = -1ULL,
> +	};
> +	return memhp_range;
> +}
> +
> +struct range memhp_get_pluggable_range(bool need_mapping)
> +{
> +	const u64 max_phys = (1ULL << (MAX_PHYSMEM_BITS + 1)) - 1;

Sorry, thought about that line a bit more, and I think this is just
wrong (took me longer to realize as it should). The old code used this
calculation to print the limit only (in a wrong way), let's recap:

Assume MAX_PHYSMEM_BITS=32

max_phys = (1ULL << (32 + 1)) - 1 = 0x1ffffffffull;

Ehm, these are 33 bit.

OTOH, old code checked for

if (max_addr >> MAX_PHYSMEM_BITS) {

Which makes sense, because

0x1ffffffffull >> 32 = 1

results in "true", meaning it's too big, while

0xffffffffull >> 32 = 0

correctly results in "false", meaning the address is fine.


So, this should just be

const u64 max_phys = 1ULL << MAX_PHYSMEM_BITS;

(similarly as calculated in virtio-mem code, or in kernel/resource.c)

> +	struct range memhp_range;
> +
> +	if (need_mapping) {
> +		memhp_range = arch_get_mappable_range();
> +		if (memhp_range.start > max_phys) {
> +			memhp_range.start = 0;
> +			memhp_range.end = 0;
> +		}
> +		memhp_range.end = min_t(u64, memhp_range.end, max_phys);
> +	} else {
> +		memhp_range.start = 0;
> +		memhp_range.end = max_phys;
> +	}
> +	return memhp_range;
> +}
> +EXPORT_SYMBOL_GPL(memhp_get_pluggable_range);
On 1/22/21 2:48 PM, David Hildenbrand wrote: > >> +/* >> + * Platforms should define arch_get_mappable_range() that provides >> + * maximum possible addressable physical memory range for which the >> + * linear mapping could be created. The platform returned address >> + * range must adhere to these following semantics. >> + * >> + * - range.start <= range.end >> + * - Range includes both end points [range.start..range.end] >> + * >> + * There is also a fallback definition provided here, allowing the >> + * entire possible physical address range in case any platform does >> + * not define arch_get_mappable_range(). >> + */ >> +struct range __weak arch_get_mappable_range(void) >> +{ >> + struct range memhp_range = { >> + .start = 0UL, >> + .end = -1ULL, >> + }; >> + return memhp_range; >> +} >> + >> +struct range memhp_get_pluggable_range(bool need_mapping) >> +{ >> + const u64 max_phys = (1ULL << (MAX_PHYSMEM_BITS + 1)) - 1; > > Sorry, thought about that line a bit more, and I think this is just > wrong (took me longer to realize as it should). The old code used this > calculation to print the limit only (in a wrong way), let's recap: > > Assume MAX_PHYSMEM_BITS=32 > > max_phys = (1ULL << (32 + 1)) - 1 = 0x1ffffffffull; > > Ehm, these are 33 bit. > > OTOH, old code checked for > > if (max_addr >> MAX_PHYSMEM_BITS) { > > Which makes sense, because > > 0x1ffffffffull >> 32 = 1 > > results in "true", meaning it's to big, while > > 0xffffffffull >> 32 = 0 > > correctly results in "false", meaning the address is fine. > > > > So, this should just be > > const u64 max_phys = 1ULL << MAX_PHYSMEM_BITS; > > (similarly as calculated in virito-mem code, or in kernel/resource.c) Should this be 1ULL << MAX_PHYSMEM_BITS - 1 instead ? Currently there are three usage for this variable in the function. - if (mhp_range.start > max_phys) - mhp_range.end = min_t(u64, mhp_range.end, max_phys) - mhp_range.end = max_phys mhp_range.end being always inclusive on the higher end and could be maximum (1ULL << MAX_PHYSMEM_BITS - 1) which is 0xFFFFFFFF instead of 0x100000000 when (1ULL << MAX_PHYSMEM_BITS) is followed for a 32 bit system. This seems consistent with the default fallback (range.end = -1ULL) defined above. > > >> + struct range memhp_range; >> + >> + if (need_mapping) { >> + memhp_range = arch_get_mappable_range(); >> + if (memhp_range.start > max_phys) { >> + memhp_range.start = 0; >> + memhp_range.end = 0; >> + } >> + memhp_range.end = min_t(u64, memhp_range.end, max_phys); >> + } else { >> + memhp_range.start = 0; >> + memhp_range.end = max_phys; >> + } >> + return memhp_range; >> +} >> +EXPORT_SYMBOL_GPL(memhp_get_pluggable_range); > >
On 22.01.21 11:41, Anshuman Khandual wrote: > > On 1/22/21 2:48 PM, David Hildenbrand wrote: >> >>> +/* >>> + * Platforms should define arch_get_mappable_range() that provides >>> + * maximum possible addressable physical memory range for which the >>> + * linear mapping could be created. The platform returned address >>> + * range must adhere to these following semantics. >>> + * >>> + * - range.start <= range.end >>> + * - Range includes both end points [range.start..range.end] >>> + * >>> + * There is also a fallback definition provided here, allowing the >>> + * entire possible physical address range in case any platform does >>> + * not define arch_get_mappable_range(). >>> + */ >>> +struct range __weak arch_get_mappable_range(void) >>> +{ >>> + struct range memhp_range = { >>> + .start = 0UL, >>> + .end = -1ULL, >>> + }; >>> + return memhp_range; >>> +} >>> + >>> +struct range memhp_get_pluggable_range(bool need_mapping) >>> +{ >>> + const u64 max_phys = (1ULL << (MAX_PHYSMEM_BITS + 1)) - 1; >> >> Sorry, thought about that line a bit more, and I think this is just >> wrong (took me longer to realize as it should). The old code used this >> calculation to print the limit only (in a wrong way), let's recap: >> >> Assume MAX_PHYSMEM_BITS=32 >> >> max_phys = (1ULL << (32 + 1)) - 1 = 0x1ffffffffull; >> >> Ehm, these are 33 bit. >> >> OTOH, old code checked for >> >> if (max_addr >> MAX_PHYSMEM_BITS) { >> >> Which makes sense, because >> >> 0x1ffffffffull >> 32 = 1 >> >> results in "true", meaning it's to big, while >> >> 0xffffffffull >> 32 = 0 >> >> correctly results in "false", meaning the address is fine. >> >> >> >> So, this should just be >> >> const u64 max_phys = 1ULL << MAX_PHYSMEM_BITS; >> >> (similarly as calculated in virito-mem code, or in kernel/resource.c) > > Should this be 1ULL << MAX_PHYSMEM_BITS - 1 instead ? Currently there are Yes, obviously, sorry, forgot the -1.
On 22.01.21 11:42, David Hildenbrand wrote: > On 22.01.21 11:41, Anshuman Khandual wrote: >> >> On 1/22/21 2:48 PM, David Hildenbrand wrote: >>> >>>> +/* >>>> + * Platforms should define arch_get_mappable_range() that provides >>>> + * maximum possible addressable physical memory range for which the >>>> + * linear mapping could be created. The platform returned address >>>> + * range must adhere to these following semantics. >>>> + * >>>> + * - range.start <= range.end >>>> + * - Range includes both end points [range.start..range.end] >>>> + * >>>> + * There is also a fallback definition provided here, allowing the >>>> + * entire possible physical address range in case any platform does >>>> + * not define arch_get_mappable_range(). >>>> + */ >>>> +struct range __weak arch_get_mappable_range(void) >>>> +{ >>>> + struct range memhp_range = { >>>> + .start = 0UL, >>>> + .end = -1ULL, >>>> + }; >>>> + return memhp_range; >>>> +} >>>> + >>>> +struct range memhp_get_pluggable_range(bool need_mapping) >>>> +{ >>>> + const u64 max_phys = (1ULL << (MAX_PHYSMEM_BITS + 1)) - 1; >>> >>> Sorry, thought about that line a bit more, and I think this is just >>> wrong (took me longer to realize as it should). The old code used this >>> calculation to print the limit only (in a wrong way), let's recap: >>> >>> Assume MAX_PHYSMEM_BITS=32 >>> >>> max_phys = (1ULL << (32 + 1)) - 1 = 0x1ffffffffull; >>> >>> Ehm, these are 33 bit. >>> >>> OTOH, old code checked for >>> >>> if (max_addr >> MAX_PHYSMEM_BITS) { >>> >>> Which makes sense, because >>> >>> 0x1ffffffffull >> 32 = 1 >>> >>> results in "true", meaning it's to big, while >>> >>> 0xffffffffull >> 32 = 0 >>> >>> correctly results in "false", meaning the address is fine. >>> >>> >>> >>> So, this should just be >>> >>> const u64 max_phys = 1ULL << MAX_PHYSMEM_BITS; >>> >>> (similarly as calculated in virito-mem code, or in kernel/resource.c) >> >> Should this be 1ULL << MAX_PHYSMEM_BITS - 1 instead ? Currently there are > > Yes, obviously, sorry, forgot the -1. > const u64 max_phys = (1ULL << MAX_PHYSMEM_BITS) - 1; to be precise.
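The exchange above settles on (1ULL << MAX_PHYSMEM_BITS) - 1 as the inclusive limit. For anyone who wants to sanity check that arithmetic outside the kernel, here is a small, self-contained user-space C program (illustrative only; MAX_PHYSMEM_BITS is hard-coded to 32 purely as an example value) that prints the three candidate limits and shows which ones the old "max_addr >> MAX_PHYSMEM_BITS" test would reject:

```c
#include <stdio.h>
#include <stdint.h>

#define MAX_PHYSMEM_BITS 32	/* example value, just for illustration */

int main(void)
{
	/* Expression from the posted patch: one bit too wide (a 33-bit value). */
	uint64_t too_wide  = (1ULL << (MAX_PHYSMEM_BITS + 1)) - 1;
	/* Exclusive limit, i.e. the first address that is no longer valid. */
	uint64_t exclusive = 1ULL << MAX_PHYSMEM_BITS;
	/* Inclusive end, matching the [range.start..range.end] semantics above. */
	uint64_t inclusive = (1ULL << MAX_PHYSMEM_BITS) - 1;

	printf("too_wide  = %#llx\n", (unsigned long long)too_wide);	/* 0x1ffffffff */
	printf("exclusive = %#llx\n", (unsigned long long)exclusive);	/* 0x100000000 */
	printf("inclusive = %#llx\n", (unsigned long long)inclusive);	/* 0xffffffff  */

	/* The old check: a non-zero result means "address is too big". */
	printf("too_wide  rejected: %d\n", (int)(too_wide >> MAX_PHYSMEM_BITS));	/* 1 */
	printf("inclusive rejected: %d\n", (int)(inclusive >> MAX_PHYSMEM_BITS));	/* 0 */

	return 0;
}
```

Only the last value can be compared directly against an inclusive range.end, which is why the thread converges on the "- 1" form.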
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 15acce5ab106..439b013f818a 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -70,6 +70,9 @@ typedef int __bitwise mhp_t;
  */
 #define MEMHP_MERGE_RESOURCE	((__force mhp_t)BIT(0))
 
+bool memhp_range_allowed(u64 start, u64 size, bool need_mapping);
+struct range memhp_get_pluggable_range(bool need_mapping);
+
 /*
  * Extended parameters for memory hotplug:
  * altmap: alternative allocator for memmap array (optional)
@@ -281,6 +284,13 @@ static inline bool movable_node_is_enabled(void)
 }
 #endif /* ! CONFIG_MEMORY_HOTPLUG */
 
+/*
+ * Keep this declaration outside CONFIG_MEMORY_HOTPLUG as some
+ * platforms might override and use arch_get_mappable_range()
+ * for internal non memory hotplug purposes.
+ */
+struct range arch_get_mappable_range(void);
+
 #if defined(CONFIG_MEMORY_HOTPLUG) || defined(CONFIG_DEFERRED_STRUCT_PAGE_INIT)
 /*
  * pgdat resizing functions
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index f9d57b9be8c7..f62664e77ff9 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -107,6 +107,9 @@ static struct resource *register_memory_resource(u64 start, u64 size,
 	if (strcmp(resource_name, "System RAM"))
 		flags |= IORESOURCE_SYSRAM_DRIVER_MANAGED;
 
+	if (!memhp_range_allowed(start, size, true))
+		return ERR_PTR(-E2BIG);
+
 	/*
 	 * Make sure value parsed from 'mem=' only restricts memory adding
 	 * while booting, so that memory hotplug won't be impacted. Please
@@ -284,22 +287,6 @@ static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
 	return 0;
 }
 
-static int check_hotplug_memory_addressable(unsigned long pfn,
-					    unsigned long nr_pages)
-{
-	const u64 max_addr = PFN_PHYS(pfn + nr_pages) - 1;
-
-	if (max_addr >> MAX_PHYSMEM_BITS) {
-		const u64 max_allowed = (1ull << (MAX_PHYSMEM_BITS + 1)) - 1;
-		WARN(1,
-		     "Hotplugged memory exceeds maximum addressable address, range=%#llx-%#llx, maximum=%#llx\n",
-		     (u64)PFN_PHYS(pfn), max_addr, max_allowed);
-		return -E2BIG;
-	}
-
-	return 0;
-}
-
 /*
  * Reasonably generic function for adding memory. It is
  * expected that archs that support memory hotplug will
@@ -317,9 +304,8 @@ int __ref __add_pages(int nid, unsigned long pfn, unsigned long nr_pages,
 	if (WARN_ON_ONCE(!params->pgprot.pgprot))
 		return -EINVAL;
 
-	err = check_hotplug_memory_addressable(pfn, nr_pages);
-	if (err)
-		return err;
+	if(!memhp_range_allowed(PFN_PHYS(pfn), nr_pages * PAGE_SIZE, false))
+		return -E2BIG;
 
 	if (altmap) {
 		/*
@@ -1180,6 +1166,61 @@ int add_memory_driver_managed(int nid, u64 start, u64 size,
 }
 EXPORT_SYMBOL_GPL(add_memory_driver_managed);
 
+/*
+ * Platforms should define arch_get_mappable_range() that provides
+ * maximum possible addressable physical memory range for which the
+ * linear mapping could be created. The platform returned address
+ * range must adhere to these following semantics.
+ *
+ * - range.start <= range.end
+ * - Range includes both end points [range.start..range.end]
+ *
+ * There is also a fallback definition provided here, allowing the
+ * entire possible physical address range in case any platform does
+ * not define arch_get_mappable_range().
+ */
+struct range __weak arch_get_mappable_range(void)
+{
+	struct range memhp_range = {
+		.start = 0UL,
+		.end = -1ULL,
+	};
+	return memhp_range;
+}
+
+struct range memhp_get_pluggable_range(bool need_mapping)
+{
+	const u64 max_phys = (1ULL << (MAX_PHYSMEM_BITS + 1)) - 1;
+	struct range memhp_range;
+
+	if (need_mapping) {
+		memhp_range = arch_get_mappable_range();
+		if (memhp_range.start > max_phys) {
+			memhp_range.start = 0;
+			memhp_range.end = 0;
+		}
+		memhp_range.end = min_t(u64, memhp_range.end, max_phys);
+	} else {
+		memhp_range.start = 0;
+		memhp_range.end = max_phys;
+	}
+	return memhp_range;
+}
+EXPORT_SYMBOL_GPL(memhp_get_pluggable_range);
+
+bool memhp_range_allowed(u64 start, u64 size, bool need_mapping)
+{
+	struct range memhp_range = memhp_get_pluggable_range(need_mapping);
+	u64 end = start + size;
+
+	if (start < end && start >= memhp_range.start && (end - 1) <= memhp_range.end)
+		return true;
+
+	pr_warn("Hotplug memory [%#llx-%#llx] exceeds maximum addressable range [%#llx-%#llx]\n",
+		start, end, memhp_range.start, memhp_range.end);
+	return false;
+}
+
 #ifdef CONFIG_MEMORY_HOTREMOVE
 /*
  * Confirm all pages in a range [start, end) belong to the same zone (skipping
diff --git a/mm/memremap.c b/mm/memremap.c
index 16b2fb482da1..e15b13736f6a 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -253,6 +253,12 @@ static int pagemap_range(struct dev_pagemap *pgmap, struct mhp_params *params,
 			goto err_kasan;
 		}
 
+		if (!memhp_range_allowed(range->start, range_len(range), true)) {
+			error = -ERANGE;
+			mem_hotplug_done();
+			goto err_add_memory;
+		}
+
 		error = arch_add_memory(nid, range->start, range_len(range),
 					params);
 	}
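The __weak fallback above is meant to be overridden per architecture. As a rough illustration only (the architecture, file path and address bounds below are hypothetical and not taken from this series), an override could look like this:

```c
/* arch/myarch/mm/init.c: hypothetical example, not a real architecture */
#include <linux/memory_hotplug.h>
#include <linux/range.h>

/* Illustrative placeholder bounds for the physical range the linear map covers. */
#define MY_ARCH_LINEAR_START	0x0000000080000000ULL
#define MY_ARCH_LINEAR_END	0x0000007fffffffffULL

/*
 * Overrides the __weak fallback in mm/memory_hotplug.c. The returned
 * range is inclusive on both ends, as required by the new semantics.
 */
struct range arch_get_mappable_range(void)
{
	struct range mhp_range = {
		.start = MY_ARCH_LINEAR_START,
		.end   = MY_ARCH_LINEAR_END,
	};

	return mhp_range;
}
```

Whatever the platform returns is still clamped against MAX_PHYSMEM_BITS inside memhp_get_pluggable_range(), so the override does not need to worry about that limit itself.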
This introduces memhp_range_allowed() which can be called in various memory
hotplug paths to prevalidate the address range which is being added, with
the platform. Then memhp_range_allowed() calls memhp_get_pluggable_range()
which provides applicable address range depending on whether linear mapping
is required or not. For ranges that require linear mapping, it calls a new
arch callback arch_get_mappable_range() which the platform can override. So
the new callback, in turn provides the platform an opportunity to configure
acceptable memory hotplug address ranges in case there are constraints.

This mechanism will help prevent platform specific errors deep down during
hotplug calls. This drops now redundant check_hotplug_memory_addressable()
check in __add_pages() but instead adds a VM_BUG_ON() check which would
ensure that the range has been validated with memhp_range_allowed() earlier
in the call chain. Besides memhp_get_pluggable_range() also can be used by
potential memory hotplug callers to avail the allowed physical range which
would go through on a given platform.

This does not really add any new range check in generic memory hotplug but
instead compensates for lost checks in arch_add_memory() where applicable
and check_hotplug_memory_addressable(), with unified memhp_range_allowed().

Cc: David Hildenbrand <david@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Suggested-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
 include/linux/memory_hotplug.h | 10 +++++
 mm/memory_hotplug.c            | 79 ++++++++++++++++++++++++++--------
 mm/memremap.c                  |  6 +++
 3 files changed, 76 insertions(+), 19 deletions(-)
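As the description notes, memhp_get_pluggable_range() can also be queried by hotplug callers up front. Below is a minimal caller-side sketch of that idea; the helper name, the clamping policy and the assumption that the block needs a linear mapping are all illustrative and not part of this series:

```c
#include <linux/memory_hotplug.h>
#include <linux/minmax.h>
#include <linux/range.h>
#include <linux/types.h>

/*
 * Hypothetical helper (not from this series): clamp a driver-supplied
 * candidate block (*size assumed > 0) to the range the platform can map
 * linearly, before attempting to plug it. Returns false if none of the
 * block is pluggable.
 */
static bool clamp_to_pluggable_range(u64 *start, u64 *size)
{
	/* need_mapping == true: this block would go via arch_add_memory() */
	struct range pluggable = memhp_get_pluggable_range(true);
	u64 end = *start + *size - 1;

	/* Candidate block does not overlap the pluggable range at all. */
	if (*start > pluggable.end || end < pluggable.start)
		return false;

	/* Trim both ends to the inclusive [pluggable.start..pluggable.end]. */
	*start = max(*start, pluggable.start);
	end = min(end, pluggable.end);
	*size = end - *start + 1;

	/* After clamping, memhp_range_allowed() should accept the block. */
	return memhp_range_allowed(*start, *size, true);
}
```

A driver would call such a helper before add_memory()-style plugging, so range problems are reported at the caller instead of failing deep inside arch code, which is exactly the failure mode the patch is trying to avoid.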