mm/memory_hotplug: fix default_zone_for_pfn() to include highmem zone range

Message ID 20200604133938.GA1513@cosmos (mailing list archive)
State New, archived
Series mm/memory_hotplug: fix default_zone_for_pfn() to include highmem zone range

Commit Message

Vamshi K Sthambamkadi June 4, 2020, 1:39 p.m. UTC
On x86_32, while onlining highmem sections, the function default_zone_for_pfn()
defaults the target zone to ZONE_NORMAL (movable_node_enabled = 0). Onlining of
the pages succeeds, and these highmem pages are moved into ZONE_NORMAL.

As a consequence, these pages are treated as lowmem, and their addresses
are calculated using lowmem_page_address(), which overflows the 32-bit
virtual address space, leading to kernel panics and an unusable system.
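
For illustration: lowmem_page_address() boils down to __va(PFN_PHYS(pfn)),
i.e. phys + PAGE_OFFSET. Below is a minimal user-space sketch of that
arithmetic, assuming the common x86_32 3G/1G split; PAGE_OFFSET and the
sample PFNs are illustrative values, not taken from this report.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_OFFSET	0xC0000000U	/* assumed 3G/1G split */

/* models __va(PFN_PHYS(pfn)) with 32-bit pointer arithmetic */
static uint32_t sketch_lowmem_address(uint32_t pfn)
{
	return (pfn << PAGE_SHIFT) + PAGE_OFFSET;	/* wraps mod 2^32 */
}

int main(void)
{
	/* lowmem PFN (256 MiB): 0x10000000 + 0xC0000000 = 0xD0000000, valid */
	printf("low:  0x%08x\n", sketch_lowmem_address(0x10000));
	/* highmem PFN (~1.1 GiB): 0x48000000 + 0xC0000000 wraps to
	 * 0x08000000, a bogus address that panics when dereferenced */
	printf("high: 0x%08x\n", sketch_lowmem_address(0x48000));
	return 0;
}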

Change default_kernel_zone_for_pfn() to also intersect the highmem PFN range
and calculate the default zone accordingly.

Signed-off-by: Vamshi K Sthambamkadi <vamshi.k.sthambamkadi@gmail.com>
---
 mm/memory_hotplug.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

Comments

David Hildenbrand June 4, 2020, 2:49 p.m. UTC | #1
On 04.06.20 15:39, Vamshi K Sthambamkadi wrote:
> On x86_32, while onlining highmem sections, the function default_zone_for_pfn()
> defaults the target zone to ZONE_NORMAL (movable_node_enabled = 0). Onlining of
> the pages succeeds, and these highmem pages are moved into ZONE_NORMAL.
> 
> As a consequence, these pages are treated as lowmem, and their addresses
> are calculated using lowmem_page_address(), which overflows the 32-bit
> virtual address space, leading to kernel panics and an unusable system.
> 
> Change default_kernel_zone_for_pfn() to also intersect the highmem PFN range
> and calculate the default zone accordingly.

We discussed this recently [1] and decided that we don't really care
about memory hotplug on 32-bit anymore (especially since user space could
still configure a different zone and make things crash). There is a
patch from Michal in [1]; it looks like it has not been picked up yet.

@Andrew, can we queue Michal's patch?

[1] https://lkml.kernel.org/r/20200218100532.GA4151@dhcp22.suse.cz

> 
> Signed-off-by: Vamshi K Sthambamkadi <vamshi.k.sthambamkadi@gmail.com>
> ---
>  mm/memory_hotplug.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index c4d5c45..30f101a 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -725,8 +725,13 @@ static struct zone *default_kernel_zone_for_pfn(int nid, unsigned long start_pfn
>  {
>  	struct pglist_data *pgdat = NODE_DATA(nid);
>  	int zid;
> +	int nr_zones = ZONE_NORMAL;
>  
> -	for (zid = 0; zid <= ZONE_NORMAL; zid++) {
> +#ifdef CONFIG_HIGHMEM
> +	nr_zones = ZONE_HIGHMEM;
> +#endif
> +
> +	for (zid = 0; zid <= nr_zones; zid++) {
>  		struct zone *zone = &pgdat->node_zones[zid];
>  
>  		if (zone_intersects(zone, start_pfn, nr_pages))
>

Patch

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index c4d5c45..30f101a 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -725,8 +725,13 @@ static struct zone *default_kernel_zone_for_pfn(int nid, unsigned long start_pfn
 {
 	struct pglist_data *pgdat = NODE_DATA(nid);
 	int zid;
+	int nr_zones = ZONE_NORMAL;
 
-	for (zid = 0; zid <= ZONE_NORMAL; zid++) {
+#ifdef CONFIG_HIGHMEM
+	nr_zones = ZONE_HIGHMEM;
+#endif
+
+	for (zid = 0; zid <= nr_zones; zid++) {
 		struct zone *zone = &pgdat->node_zones[zid];
 
 		if (zone_intersects(zone, start_pfn, nr_pages))
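
For reference, default_kernel_zone_for_pfn() returns the first zone in the
walk whose PFN span overlaps [start_pfn, start_pfn + nr_pages), with
zone_intersects() as the overlap test. A simplified, self-contained sketch
of that check follows; the field names mirror struct zone, but this is
illustrative, not the kernel's exact implementation:

#include <stdbool.h>

/* illustrative stand-in for the relevant struct zone fields */
struct zone_span {
	unsigned long zone_start_pfn;	/* first PFN of the zone */
	unsigned long spanned_pages;	/* number of PFNs the zone spans */
};

/* true iff [start_pfn, start_pfn + nr_pages) overlaps the zone span */
static bool span_intersects(const struct zone_span *z,
			    unsigned long start_pfn, unsigned long nr_pages)
{
	unsigned long zone_end_pfn = z->zone_start_pfn + z->spanned_pages;

	if (!z->spanned_pages)		/* empty zone never intersects */
		return false;
	return start_pfn < zone_end_pfn &&
	       start_pfn + nr_pages > z->zone_start_pfn;
}

With the patch applied, the walk's upper bound becomes ZONE_HIGHMEM when
CONFIG_HIGHMEM is set, so a highmem section's PFN range now matches
ZONE_HIGHMEM instead of falling through to the ZONE_NORMAL default.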