
mm/mm_init.c: add zidcache to the init_reserved_page function

Message ID: 20240904115541.6519-1-liuq131@chinatelecom.cn (mailing list archive)
State: New
Series: mm/mm_init.c: add zidcache to the init_reserved_page function

Commit Message

Qiang Liu Sept. 4, 2024, 11:55 a.m. UTC
Each call to init_reserved_page() looks up the corresponding zid for the
given pfn. The lookup is repeated even when subsequent calls resolve to
the same zid as the previous one.

During system initialization, memmap_init_reserved_pages() calls
init_reserved_page() for each contiguous memory region in mem_region,
so caching the zid can significantly improve performance. Tests show
that adding a zid cache reduces the execution time of
memmap_init_reserved_pages() by over 7%.

Signed-off-by: Qiang Liu <liuq131@chinatelecom.cn>
---
 mm/mm_init.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

Comments

Andrew Morton Sept. 4, 2024, 8:41 p.m. UTC | #1
On Wed,  4 Sep 2024 19:55:41 +0800 Qiang Liu <liuq131@chinatelecom.cn> wrote:

> Each call to init_reserved_page() looks up the corresponding zid for the
> given pfn. The lookup is repeated even when subsequent calls resolve to
> the same zid as the previous one.
> 
> During system initialization, memmap_init_reserved_pages() calls
> init_reserved_page() for each contiguous memory region in mem_region,
> so caching the zid can significantly improve performance. Tests show
> that adding a zid cache reduces the execution time of
> memmap_init_reserved_pages() by over 7%.
> 

OK, but how much speedup do we see overall?  In other words, is
memmap_init_reserved_pages() a significant consumer of execution time?

I'd be surprised if it makes much difference at all - MAX_NR_ZONES is a
small number.  Maybe we call init_reserved_page() a lot.
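
For context, a rough sketch of the call path in question. This is an
editorial illustration, not the actual mm/mm_init.c code: the function
name below and the per-range nid handling are simplified placeholders.

/*
 * Approximate shape of the path: memmap_init_reserved_pages() walks
 * every reserved memblock range, and init_reserved_page() ends up being
 * invoked once per pfn in each range, so the zid lookup count can be
 * large even though MAX_NR_ZONES is small.
 */
static void __init memmap_init_reserved_pages_sketch(void)
{
	phys_addr_t start, end;
	u64 i;

	for_each_reserved_mem_range(i, &start, &end) {
		unsigned long pfn = PFN_DOWN(start);
		unsigned long end_pfn = PFN_UP(end);
		int nid = 0;	/* the real code resolves the node per range */

		for (; pfn < end_pfn; pfn++)
			init_reserved_page(pfn, nid);	/* zid lookup happens here */
	}
}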


> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -710,19 +710,25 @@ static void __meminit init_reserved_page(unsigned long pfn, int nid)
>  {
>  	pg_data_t *pgdat;
>  	int zid;
> +	struct zone *zone;
> +	static int zidcache;

What locking protects zidcache?  lock_device_hotplug() and/or
__init-time serialization?  This might be worth mentioning?

>  
>  	if (early_page_initialised(pfn, nid))
>  		return;
>  
>  	pgdat = NODE_DATA(nid);
>  
> -	for (zid = 0; zid < MAX_NR_ZONES; zid++) {
> -		struct zone *zone = &pgdat->node_zones[zid];
> +	zone = &pgdat->node_zones[zidcache];

OK, but if init_reserved_page() was previously called against a
different node, `zidcache' will refer to a zone in a different node. 
The code will work OK, but it's worth mentioning somewhere I guess.


> +	if (unlikely(zone_spans_pfn(zone, pfn)))

Isn't this wrong?  We need to redo the search if !zone_spans_pfn(...)?

> +		for (zid = 0; zid < MAX_NR_ZONES; zid++) {
> +			zone = &pgdat->node_zones[zid];
>  
> -		if (zone_spans_pfn(zone, pfn))
> -			break;
> -	}
> -	__init_single_page(pfn_to_page(pfn), pfn, zid, nid);
> +			if (zone_spans_pfn(zone, pfn)) {
> +				zidcache = zid;
> +				break;
> +			}
> +		}
> +	__init_single_page(pfn_to_page(pfn), pfn, zidcache, nid);
>  }
>  #else
>  static inline void pgdat_set_deferred_range(pg_data_t *pgdat) {}
> -- 
> 2.27.0
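
For reference, a minimal sketch (not a posted revision of the patch) of
how the cached lookup might look with the check negated as suggested in
the review above, with the other two review points noted as comments:

static void __meminit init_reserved_page(unsigned long pfn, int nid)
{
	pg_data_t *pgdat;
	struct zone *zone;
	int zid;
	/*
	 * No explicit locking: this relies on the __init-time,
	 * single-threaded serialization raised in the review above.
	 */
	static int zidcache;

	if (early_page_initialised(pfn, nid))
		return;

	pgdat = NODE_DATA(nid);

	/*
	 * zidcache may have been set while initialising a different node,
	 * so the indexed zone on this node may not span the pfn; the
	 * negated check falls back to the full scan in that case.
	 */
	zone = &pgdat->node_zones[zidcache];
	if (unlikely(!zone_spans_pfn(zone, pfn))) {
		for (zid = 0; zid < MAX_NR_ZONES; zid++) {
			zone = &pgdat->node_zones[zid];
			if (zone_spans_pfn(zone, pfn)) {
				zidcache = zid;
				break;
			}
		}
	}
	__init_single_page(pfn_to_page(pfn), pfn, zidcache, nid);
}

Keeping the fallback identical to the existing full scan preserves
correctness on a cache miss; only the common (cache-hit) case changes.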

Patch

diff --git a/mm/mm_init.c b/mm/mm_init.c
index 51960079875b..2d5b5ffa962b 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -710,19 +710,25 @@  static void __meminit init_reserved_page(unsigned long pfn, int nid)
 {
 	pg_data_t *pgdat;
 	int zid;
+	struct zone *zone;
+	static int zidcache;
 
 	if (early_page_initialised(pfn, nid))
 		return;
 
 	pgdat = NODE_DATA(nid);
 
-	for (zid = 0; zid < MAX_NR_ZONES; zid++) {
-		struct zone *zone = &pgdat->node_zones[zid];
+	zone = &pgdat->node_zones[zidcache];
+	if (unlikely(zone_spans_pfn(zone, pfn)))
+		for (zid = 0; zid < MAX_NR_ZONES; zid++) {
+			zone = &pgdat->node_zones[zid];
 
-		if (zone_spans_pfn(zone, pfn))
-			break;
-	}
-	__init_single_page(pfn_to_page(pfn), pfn, zid, nid);
+			if (zone_spans_pfn(zone, pfn)) {
+				zidcache = zid;
+				break;
+			}
+		}
+	__init_single_page(pfn_to_page(pfn), pfn, zidcache, nid);
 }
 #else
 static inline void pgdat_set_deferred_range(pg_data_t *pgdat) {}