
[12/16] mm/huge_memory: minor cleanup for split_huge_pages_all

Message ID: 20220622170627.19786-13-linmiaohe@huawei.com
State: New
Series: A few cleanup patches for huge_memory

Commit Message

Miaohe Lin June 22, 2022, 5:06 p.m. UTC
There is nothing to do if a zone doesn't have any pages managed by the
buddy allocator, so check managed_zone() rather than populated_zone().
Also, once a THP has been found, there is no need to traverse its
subpages again.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/huge_memory.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)
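
As an aside, a small self-contained userspace sketch of the second change
(the pfn layout, the head[] array and NR_PFNS below are made up purely for
illustration; this is not kernel code): it shows how "pfn += nr_pages - 1",
together with the loop's own pfn++, resumes the scan at the first pfn after
a THP instead of revisiting its subpages one by one.

#include <stdio.h>

#define NR_PFNS 16

int main(void)
{
	/* head[pfn] > 0 marks a hypothetical THP head spanning that many pfns. */
	int head[NR_PFNS] = { 0 };
	unsigned long pfn, visited = 0;

	head[2] = 4;	/* 4-page THP covering pfns 2..5  */
	head[8] = 8;	/* 8-page THP covering pfns 8..15 */

	for (pfn = 0; pfn < NR_PFNS; pfn++) {
		int nr_pages = head[pfn];

		visited++;
		if (!nr_pages)
			continue;	/* not a THP head, nothing to "split" */

		printf("split THP at pfn %lu covering %d pages\n", pfn, nr_pages);
		pfn += nr_pages - 1;	/* skip the subpages, as in the patch */
	}

	printf("visited %lu of %d pfns\n", visited, NR_PFNS);
	return 0;
}

With the skip in place the loop visits 6 of the 16 pfns; dropping the
"pfn += nr_pages - 1" line makes it revisit every subpage of each THP.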

Patch

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 506e7a682780..0030b4f67cd9 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2858,9 +2858,12 @@  static void split_huge_pages_all(void)
 	unsigned long total = 0, split = 0;
 
 	pr_debug("Split all THPs\n");
-	for_each_populated_zone(zone) {
+	for_each_zone(zone) {
+		if (!managed_zone(zone))
+			continue;
 		max_zone_pfn = zone_end_pfn(zone);
 		for (pfn = zone->zone_start_pfn; pfn < max_zone_pfn; pfn++) {
+			int nr_pages;
 			if (!pfn_valid(pfn))
 				continue;
 
@@ -2876,8 +2879,10 @@  static void split_huge_pages_all(void)
 
 			total++;
 			lock_page(page);
+			nr_pages = thp_nr_pages(page);
 			if (!split_huge_page(page))
 				split++;
+			pfn += nr_pages - 1;
 			unlock_page(page);
 next:
 			put_page(page);
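
For reference, the distinction behind the first change: populated_zone()
tests whether a zone has any present pages at all, while managed_zone()
tests whether any of them were actually handed to the buddy allocator.
The helpers look roughly like the following (paraphrased from
include/linux/mmzone.h around this release; see the tree for the exact
definitions):

static inline bool managed_zone(struct zone *zone)
{
	/* true if the buddy allocator manages any pages in this zone */
	return zone_managed_pages(zone);
}

static inline bool populated_zone(struct zone *zone)
{
	/* true if the zone has any present pages, managed or not */
	return zone->present_pages;
}

A zone can be populated yet have nothing managed by the buddy allocator,
for example when all of its memory is reserved at boot, which is why
managed_zone() is the tighter check for this scan.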