
[04/22] xen/numa: vmap the pages for memnodemap

Message ID 20221216114853.8227-5-julien@xen.org (mailing list archive)
State New, archived
Series Remove the directmap

Commit Message

Julien Grall Dec. 16, 2022, 11:48 a.m. UTC
From: Hongyan Xia <hongyxia@amazon.com>

This avoids the assumption that there is a direct map and boot pages
fall inside the direct map.

Clean up the variables so that mfn actually stores a type-safe mfn.

Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
Signed-off-by: Julien Grall <jgrall@amazon.com>

----

    Changes compared to Hongyan's version:
        * The modified function was moved to common code, so the patch was rebased on top of that
        * vmap_boot_pages() was renamed to vmap_contig_pages()
---
 xen/common/numa.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
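
The helper used here simply establishes a virtual mapping for a physically
contiguous range, so the caller no longer needs the directmap. A minimal
sketch of what vmap_contig_pages() could look like, assuming it is built on
the existing __vmap() helper and its granularity parameter (the actual
implementation is introduced earlier in this series and may differ):

    /*
     * Illustrative sketch only: map nr_pages physically contiguous pages
     * starting at mfn into the vmap region and return their virtual
     * address.  Assumes __vmap() accepts a granularity of nr_pages so
     * that a single MFN entry describes the whole contiguous range.
     */
    void *vmap_contig_pages(mfn_t mfn, unsigned int nr_pages)
    {
        return __vmap(&mfn, nr_pages, 1, 1, PAGE_HYPERVISOR, VMAP_DEFAULT);
    }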

Comments

Jan Beulich Dec. 20, 2022, 3:25 p.m. UTC | #1
On 16.12.2022 12:48, Julien Grall wrote:
> From: Hongyan Xia <hongyxia@amazon.com>
> 
> This avoids the assumption that there is a direct map and boot pages
> fall inside the direct map.
> 
> Clean up the variables so that mfn actually stores a type-safe mfn.
> 
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
(obviously remains valid across ...

> --- a/xen/common/numa.c
> +++ b/xen/common/numa.c
> @@ -424,13 +424,13 @@ static int __init populate_memnodemap(const struct node *nodes,
>  static int __init allocate_cachealigned_memnodemap(void)
>  {
>      unsigned long size = PFN_UP(memnodemapsize * sizeof(*memnodemap));
> -    unsigned long mfn = mfn_x(alloc_boot_pages(size, 1));
> +    mfn_t mfn = alloc_boot_pages(size, 1);
>  
> -    memnodemap = mfn_to_virt(mfn);
> -    mfn <<= PAGE_SHIFT;
> +    memnodemap = vmap_contig_pages(mfn, size);

... a possible rename of this function)

Jan

Patch

diff --git a/xen/common/numa.c b/xen/common/numa.c
index 4948b21fbe66..2040b3d974e5 100644
--- a/xen/common/numa.c
+++ b/xen/common/numa.c
@@ -424,13 +424,13 @@  static int __init populate_memnodemap(const struct node *nodes,
 static int __init allocate_cachealigned_memnodemap(void)
 {
     unsigned long size = PFN_UP(memnodemapsize * sizeof(*memnodemap));
-    unsigned long mfn = mfn_x(alloc_boot_pages(size, 1));
+    mfn_t mfn = alloc_boot_pages(size, 1);
 
-    memnodemap = mfn_to_virt(mfn);
-    mfn <<= PAGE_SHIFT;
+    memnodemap = vmap_contig_pages(mfn, size);
+    BUG_ON(!memnodemap);
     size <<= PAGE_SHIFT;
     printk(KERN_DEBUG "NUMA: Allocated memnodemap from %lx - %lx\n",
-           mfn, mfn + size);
+           mfn_to_maddr(mfn), mfn_to_maddr(mfn) + size);
     memnodemapsize = size / sizeof(*memnodemap);
 
     return 0;
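
Once allocated this way, memnodemap is only ever read through its virtual
address, e.g. by the physical-address-to-node lookup, which is why a
vmap()-based mapping is sufficient and the directmap is not required. A
simplified, illustrative sketch of that lookup (the real one in Xen uses a
pdx-based index plus extra sanity checks):

    /* Illustrative sketch only: look up the node owning a physical address. */
    static inline nodeid_t phys_to_nid(paddr_t addr)
    {
        /* memnodemap is accessed purely via its virtual mapping. */
        return memnodemap[paddr_to_pdx(addr) >> memnode_shift];
    }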