
[V1] mm: page_alloc: fix missed updates of lowmem_reserve in adjust_managed_page_count

Message ID 20241225021034.45693-1-15645113830zzh@gmail.com (mailing list archive)
State New

Commit Message

zihan zhou Dec. 25, 2024, 2:10 a.m. UTC
In the kernel, a zone's lowmem_reserve and _watermark, as well as the
global variable totalreserve_pages, depend on the value of
managed_pages, but after adjust_managed_page_count() runs these values
are not updated, which causes problems.

For example, on a system with six 1GB huge pages, we found that the
protection values in /proc/zoneinfo (zone->lowmem_reserve) are wrong.
They appear to be calculated from the initial managed_pages and are not
updated after managed_pages changes; they are only refreshed once
/proc/sys/vm/lowmem_reserve_ratio is read.

Reading /proc/sys/vm/lowmem_reserve_ratio triggers:

lowmem_reserve_ratio_sysctl_handler
----setup_per_zone_lowmem_reserve
--------calculate_totalreserve_pages

The protection values change after reading the file:

[root@test ~]# cat /proc/zoneinfo | grep protection
        protection: (0, 2719, 57360, 0)
        protection: (0, 0, 54640, 0)
        protection: (0, 0, 0, 0)
        protection: (0, 0, 0, 0)
[root@test ~]# cat /proc/sys/vm/lowmem_reserve_ratio
256     256     32      0
[root@test ~]# cat /proc/zoneinfo | grep protection
        protection: (0, 2735, 63524, 0)
        protection: (0, 0, 60788, 0)
        protection: (0, 0, 0, 0)
        protection: (0, 0, 0, 0)
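
For reference, the protection values are just a function of
managed_pages: our reading of setup_per_zone_lowmem_reserve() in
current kernels is that, for zone i, protection[j] is roughly the sum
of managed_pages of zones i+1..j divided by
sysctl_lowmem_reserve_ratio[i], and 0 when zone j is empty. The
user-space sketch below (hypothetical zone sizes, not taken from the
machine above) only reproduces that arithmetic to show how the numbers
track managed_pages:

/*
 * Illustrative user-space sketch only (hypothetical zone sizes, not
 * the kernel implementation).
 */
#include <stdio.h>

#define NR_ZONES 4

int main(void)
{
	/* hypothetical managed_pages: DMA, DMA32, Normal, Movable */
	unsigned long managed[NR_ZONES] = { 4000, 700000, 15000000, 0 };
	/* mirrors the default lowmem_reserve_ratio: 256 256 32 0 */
	unsigned long ratio[NR_ZONES] = { 256, 256, 32, 0 };

	for (int i = 0; i < NR_ZONES; i++) {
		unsigned long sum = 0;

		printf("zone %d protection:", i);
		for (int j = 0; j < NR_ZONES; j++) {
			unsigned long reserve = 0;

			if (j > i) {
				sum += managed[j];
				if (ratio[i] && managed[i] && managed[j])
					reserve = sum / ratio[i];
			}
			printf(" %lu", reserve);
		}
		printf("\n");
	}
	return 0;
}

With a larger managed_pages for the Normal zone, the protection entries
of the lower zones grow proportionally.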

A larger lowmem_reserve also increases totalreserve_pages, which
reduces available memory. The machine above is only a test machine, so
the increase is small; on our production machines the reserved memory
grows by several GB just from reading this file. It is clearly
unreasonable for available memory to drop sharply merely because a file
was read.
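
For reference, calculate_totalreserve_pages() roughly sums, for each
zone, the largest lowmem_reserve[] entry plus the high watermark,
capped at that zone's managed_pages. A minimal user-space sketch of
that summation (managed_pages and watermark numbers are hypothetical;
the protection values are the post-read ones from the test machine
above):

/*
 * Illustrative sketch of the totalreserve_pages arithmetic, not the
 * kernel code: per zone, take the largest lowmem_reserve[] entry plus
 * the high watermark, cap it at managed_pages, and sum over all zones.
 */
#include <stdio.h>

#define NR_ZONES 4

int main(void)
{
	/* hypothetical managed_pages and high watermarks per zone */
	unsigned long managed[NR_ZONES] = { 4000, 700000, 15000000, 0 };
	unsigned long high_wmark[NR_ZONES] = { 45, 8000, 170000, 0 };
	/* largest protection entry per zone, e.g. from the output above */
	unsigned long max_reserve[NR_ZONES] = { 63524, 60788, 0, 0 };
	unsigned long total = 0;

	for (int i = 0; i < NR_ZONES; i++) {
		unsigned long reserve = max_reserve[i] + high_wmark[i];

		if (reserve > managed[i])
			reserve = managed[i];
		total += reserve;
	}
	printf("totalreserve_pages ~= %lu\n", total);
	return 0;
}

When the protection values grow after a managed_pages change,
totalreserve_pages grows with them, which is the drop in available
memory described above.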

This patch updates the reserved memory whenever managed_pages is
updated, so the amount of reserved memory stays stable. The _watermark
values should probably also be updated along with managed_pages, but we
have not done that here because we are unsure whether deriving the
watermarks from the initial managed_pages is reasonable. If it is not,
we will send a follow-up patch.
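
For context, the watermarks are derived from managed_pages as well: our
reading of __setup_per_zone_wmarks() is that each zone gets a share of
min_free_kbytes proportional to its managed_pages, and the low/high
marks are then spaced by watermark_scale_factor. A rough, simplified
sketch of that arithmetic (all numbers hypothetical; highmem and
watermark_boost details omitted):

/*
 * Rough, simplified sketch of how the watermarks scale with
 * managed_pages (hypothetical numbers, not the kernel code).
 */
#include <stdio.h>

int main(void)
{
	unsigned long min_free_kbytes = 67584;		/* hypothetical */
	unsigned long pages_min = min_free_kbytes / 4;	/* 4K pages */
	unsigned long lowmem_pages = 15704000;		/* sum of managed_pages */
	unsigned long zone_managed = 15000000;		/* this zone's managed_pages */
	unsigned long watermark_scale_factor = 10;	/* default */

	unsigned long wmark_min = pages_min * zone_managed / lowmem_pages;
	unsigned long gap = wmark_min / 4;
	unsigned long scaled = zone_managed * watermark_scale_factor / 10000;

	if (scaled > gap)
		gap = scaled;

	printf("min=%lu low=%lu high=%lu\n",
	       wmark_min, wmark_min + gap, wmark_min + 2 * gap);
	return 0;
}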

Signed-off-by: zihan zhou <15645113830zzh@gmail.com>
Signed-off-by: yaowenchao <yaowenchao@jd.com>
---
 mm/page_alloc.c | 3 +++
 1 file changed, 3 insertions(+)

Patch

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b6958333054d..b23e128afbcd 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5826,10 +5826,13 @@  __meminit void zone_pcp_init(struct zone *zone)
 			 zone->present_pages, zone_batchsize(zone));
 }
 
+static void setup_per_zone_lowmem_reserve(void);
+
 void adjust_managed_page_count(struct page *page, long count)
 {
 	atomic_long_add(count, &page_zone(page)->managed_pages);
 	totalram_pages_add(count);
+	setup_per_zone_lowmem_reserve();
 }
 EXPORT_SYMBOL(adjust_managed_page_count);