[v4,4/8] mm/memory_hotplug: Poison memmap in remove_pfn_range_from_zone()

Message ID 20190830091428.18399-5-david@redhat.com
State New, archived
Series mm/memory_hotplug: Shrink zones before removing memory

Commit Message

David Hildenbrand Aug. 30, 2019, 9:14 a.m. UTC
Let's poison the struct pages, similar to what we do when adding new
memory in sparse_add_section(). Also call remove_pfn_range_from_zone()
from memunmap_pages(), so we can poison the memmap from there as well.

While at it, calculate the pfn in memunmap_pages() only once.
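For reference, page_init_poison() (mm/debug.c, gated by CONFIG_DEBUG_VM)
simply fills the given memmap range with a poison pattern. A minimal
sketch of its shape around the time of this series (the toggle name and
pattern value are taken from mainline of that era, not from this patch):

	/*
	 * Sketch of page_init_poison(): if struct-page poisoning is
	 * enabled (default under CONFIG_DEBUG_VM, controllable via the
	 * vm_debug= boot option), overwrite the memmap range with
	 * PAGE_POISON_PATTERN (-1, i.e. all 0xff bytes). Any later
	 * access to these now-uninitialized struct pages then triggers
	 * a loud, recognizable splat instead of silently reading stale
	 * data.
	 */
	void page_init_poison(struct page *page, size_t size)
	{
		if (page_init_poisoning)
			memset(page, PAGE_POISON_PATTERN, size);
	}
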

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: David Hildenbrand <david@redhat.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/memory_hotplug.c | 3 +++
 mm/memremap.c       | 7 ++++---
 2 files changed, 7 insertions(+), 3 deletions(-)

Comments

Aneesh Kumar K.V Sept. 26, 2019, 9:10 a.m. UTC | #1
David Hildenbrand <david@redhat.com> writes:
> @@ -134,11 +134,12 @@ void memunmap_pages(struct dev_pagemap *pgmap)
>
>  	mem_hotplug_begin();
> +	remove_pfn_range_from_zone(page_zone(pfn_to_page(pfn)), pfn,
> +				   PHYS_PFN(resource_size(res)));

That should be part of PATCH 3?

>  	if (pgmap->type == MEMORY_DEVICE_PRIVATE) {
> -		pfn = PHYS_PFN(res->start);
>  		__remove_pages(pfn, PHYS_PFN(resource_size(res)), NULL);
>  	} else {
>  		arch_remove_memory(nid, res->start, resource_size(res),
> -- 
> 2.21.0

-aneesh
David Hildenbrand Sept. 26, 2019, 9:14 a.m. UTC | #2
On 26.09.19 11:10, Aneesh Kumar K.V wrote:
> David Hildenbrand <david@redhat.com> writes:
>> @@ -134,11 +134,12 @@ void memunmap_pages(struct dev_pagemap *pgmap)
>>
>>  	mem_hotplug_begin();
>> +	remove_pfn_range_from_zone(page_zone(pfn_to_page(pfn)), pfn,
>> +				   PHYS_PFN(resource_size(res)));
> 
> That should be part of PATCH 3?

I thought about that, but moved it to #4 because it would easily get
lost in that already-big-enough patch, and it is a NOP for ZONE_DEVICE
before this change.

Thanks!
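
For context, the rough shape of remove_pfn_range_from_zone() as
introduced in patch 3 (a sketch under that assumption, not copied
verbatim from the series): without the poisoning hunk, the early
ZONE_DEVICE return makes the call a NOP for device memory.

	void __ref remove_pfn_range_from_zone(struct zone *zone,
					      unsigned long start_pfn,
					      unsigned long nr_pages)
	{
		struct pglist_data *pgdat = zone->zone_pgdat;
		unsigned long flags;

		/* Patch 4 inserts the page_init_poison() call right here. */

		/*
		 * Zone shrinking code cannot deal with ZONE_DEVICE, and
		 * dev_pagemap pages are never onlined, so bail out early.
		 */
		if (zone_idx(zone) == ZONE_DEVICE)
			return;

		pgdat_resize_lock(pgdat, &flags);
		shrink_zone_span(zone, start_pfn, start_pfn + nr_pages);
		/* ... shrink the node (pgdat) span as well ... */
		pgdat_resize_unlock(pgdat, &flags);
	}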

Patch

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 4da59ec14dbb..5bfca690a922 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -464,6 +464,9 @@ void __ref remove_pfn_range_from_zone(struct zone *zone,
 	struct pglist_data *pgdat = zone->zone_pgdat;
 	unsigned long flags;
 
+	/* Poison struct pages because they are now uninitialized again. */
+	page_init_poison(pfn_to_page(start_pfn), sizeof(struct page) * nr_pages);
+
 	/*
 	 * Zone shrinking code cannot properly deal with ZONE_DEVICE. So
 	 * we will not try to shrink the zones - which is okay as
diff --git a/mm/memremap.c b/mm/memremap.c
index cb90c3e8804a..48f573502f88 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -125,7 +125,7 @@ static void dev_pagemap_cleanup(struct dev_pagemap *pgmap)
 void memunmap_pages(struct dev_pagemap *pgmap)
 {
 	struct resource *res = &pgmap->res;
-	unsigned long pfn;
+	unsigned long pfn = PHYS_PFN(res->start);
 	int nid;
 
 	dev_pagemap_kill(pgmap);
@@ -134,11 +134,12 @@ void memunmap_pages(struct dev_pagemap *pgmap)
 	dev_pagemap_cleanup(pgmap);
 
 	/* pages are dead and unused, undo the arch mapping */
-	nid = page_to_nid(pfn_to_page(PHYS_PFN(res->start)));
+	nid = page_to_nid(pfn_to_page(pfn));
 
 	mem_hotplug_begin();
+	remove_pfn_range_from_zone(page_zone(pfn_to_page(pfn)), pfn,
+				   PHYS_PFN(resource_size(res)));
 	if (pgmap->type == MEMORY_DEVICE_PRIVATE) {
-		pfn = PHYS_PFN(res->start);
 		__remove_pages(pfn, PHYS_PFN(resource_size(res)), NULL);
 	} else {
 		arch_remove_memory(nid, res->start, resource_size(res),