
[v5,10/15] xen/page_alloc: vmap heap nodes when they are outside the direct map

Message ID 20250108151822.16030-11-alejandro.vallejo@cloud.com (mailing list archive)
State New
Series: Remove the directmap

Commit Message

Alejandro Vallejo Jan. 8, 2025, 3:18 p.m. UTC
From: Hongyan Xia <hongyxia@amazon.com>

When there is no direct map, arch_mfns_in_directmap() will always
return false, so init_node_heap() will allocate xenheap pages from an
existing node to hold the metadata of a new node. This means the
metadata of a new node lives on a different node, slowing down heap
allocation.

Since we now have early vmap, vmap the metadata locally in the new node.

Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
Signed-off-by: Julien Grall <jgrall@amazon.com>
Signed-off-by: Elias El Yandouzi <eliasely@amazon.com>
Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
v4->v5:
  * Fix bug introduced in v4 by which node metadata would be
    unconditionally mapped at the tail of the heap node.
  * Remove extra space in conditional

v3->v4:
  * Change type of the parameters to paddr_t
  * Use clear_domain_page() instead of open-coding it

v1->v2:
  * vmap_contig_pages() was renamed to vmap_contig()
  * Fix indentation and coding style

Changes from Hongyan's version:
  * arch_mfn_in_direct_map() was renamed to
    arch_mfns_in_direct_map()
  * Use vmap_contig_pages() rather than __vmap(...).
  * Add missing include (xen/vmap.h) so it compiles on Arm
---
 xen/common/page_alloc.c | 25 +++++++++++++++++--------
 1 file changed, 17 insertions(+), 8 deletions(-)

Patch

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 1c01332b6cb0..3af86a213c4e 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -139,6 +139,7 @@ 
 #include <xen/softirq.h>
 #include <xen/spinlock.h>
 #include <xen/vm_event.h>
+#include <xen/vmap.h>
 #include <xen/xvmalloc.h>
 
 #include <asm/flushtlb.h>
@@ -615,22 +616,30 @@  static unsigned long init_node_heap(int node, unsigned long mfn,
         needed = 0;
     }
     else if ( *use_tail && nr >= needed &&
-              arch_mfns_in_directmap(mfn + nr - needed, needed) &&
               (!xenheap_bits ||
                !((mfn + nr - 1) >> (xenheap_bits - PAGE_SHIFT))) )
     {
-        _heap[node] = mfn_to_virt(mfn + nr - needed);
-        avail[node] = mfn_to_virt(mfn + nr - 1) +
-                      PAGE_SIZE - sizeof(**avail) * NR_ZONES;
+        if ( arch_mfns_in_directmap(mfn + nr - needed, needed) )
+            _heap[node] = mfn_to_virt(mfn + nr - needed);
+        else
+            _heap[node] = vmap_contig(_mfn(mfn + nr - needed), needed);
+
+        BUG_ON(!_heap[node]);
+        avail[node] = (void *)(_heap[node]) + (needed << PAGE_SHIFT) -
+                        sizeof(**avail) * NR_ZONES;
     }
     else if ( nr >= needed &&
-              arch_mfns_in_directmap(mfn, needed) &&
               (!xenheap_bits ||
                !((mfn + needed - 1) >> (xenheap_bits - PAGE_SHIFT))) )
     {
-        _heap[node] = mfn_to_virt(mfn);
-        avail[node] = mfn_to_virt(mfn + needed - 1) +
-                      PAGE_SIZE - sizeof(**avail) * NR_ZONES;
+        if ( arch_mfns_in_directmap(mfn, needed) )
+            _heap[node] = mfn_to_virt(mfn);
+        else
+            _heap[node] = vmap_contig(_mfn(mfn), needed);
+
+        BUG_ON(!_heap[node]);
+        avail[node] = (void *)(_heap[node]) + (needed << PAGE_SHIFT) -
+                        sizeof(**avail) * NR_ZONES;
         *use_tail = false;
     }
     else if ( get_order_from_bytes(sizeof(**_heap)) ==