
[v2,1/4] xen/arm64: head: Don't map too much in boot_third

Message ID 20230629201129.12934-2-julien@xen.org (mailing list archive)
State New, archived
Series xen/arm: Enable UBSAN support

Commit Message

Julien Grall June 29, 2023, 8:11 p.m. UTC
From: Julien Grall <jgrall@amazon.com>

At the moment, we are mapping the full size of the reserved area for
Xen (i.e. 2MB) even if the binary is smaller. We don't exactly know
what's after Xen, so it is not a good idea to map more than necessary
for a couple of reasons:
    * We would need to use break-before-make if the extra PTE needs to
      be updated to point to another region
    * The extra area mapped may be mapped again by Xen with different
      memory attributes. This would result in an attribute mismatch.

Therefore, rework the logic in create_page_tables() to map only what's
necessary. To simplify the logic, we also want to make sure _end is
page-aligned. So align the symbol in the linker script and add an
assert to catch any change.

Lastly, take the opportunity to confirm that _start is equal to
XEN_VIRT_START, as the assembly code uses both interchangeably.

Signed-off-by: Julien Grall <jgrall@amazon.com>

---

    Changes in v2:
        - Fix typo and coding style
        - Check _start == XEN_VIRT_START
---
 xen/arch/arm/arm64/head.S | 15 ++++++++++++++-
 xen/arch/arm/xen.lds.S    |  9 +++++++++
 2 files changed, 23 insertions(+), 1 deletion(-)
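
For reference, the loop bound that the patch computes in create_page_tables()
comes down to the arithmetic below. This is a minimal C sketch rather than
code from the Xen tree; PAGE_SHIFT, PTE_SIZE and the example addresses are
illustrative assumptions.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12   /* assuming the usual 4KB pages on arm64 */
#define PTE_SIZE   8    /* a level-3 (boot_third) entry is 64 bits */

int main(void)
{
    /* Hypothetical link-time addresses standing in for _start and _end. */
    uint64_t start = 0x40200000UL;          /* vaddr(_start) */
    uint64_t end   = start + 0x134000UL;    /* vaddr(_end), page-aligned */

    uint64_t size  = end - start;           /* effective size of Xen */
    uint64_t pages = size >> PAGE_SHIFT;    /* number of pages to map */
    uint64_t bound = pages * PTE_SIZE;      /* x0: loop bound in bytes */

    /* The slot offset (x1 in head.S) advances by PTE_SIZE per mapped page
     * and stops once it reaches 'bound', instead of always writing 512
     * entries as before the patch. */
    printf("pages=%" PRIu64 ", loop bound=%" PRIu64 " bytes\n", pages, bound);
    return 0;
}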

Comments

Henry Wang June 29, 2023, 10:52 p.m. UTC | #1
Hi Julien,

> -----Original Message-----
> Subject: [v2 1/4] xen/arm64: head: Don't map too much in boot_third
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, we are mapping the size of the reserved area for Xen
> (i.e. 2MB) even if the binary is smaller. We don't exactly know what's
> after Xen, so it is not a good idea to map more than necessary for a
> couple of reasons:
>     * We would need to use break-before-make if the extra PTE needs to
>       be updated to point to another region
>     * The extra area mapped may be mapped again by Xen with different
>       memory attributes. This would result in an attribute mismatch.
> 
> Therefore, rework the logic in create_page_tables() to map only what's
> necessary. To simplify the logic, we also want to make sure _end
> is page-aligned. So align the symbol in the linker and add an assert
> to catch any change.
> 
> Lastly, take the opportunity to confirm that _start is equal to
> XEN_VIRT_START as the assembly is using both interchangeably.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry
Michal Orzel June 30, 2023, 6:50 a.m. UTC | #2
On 29/06/2023 22:11, Julien Grall wrote:
> 
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, we are mapping the size of the reserved area for Xen
> (i.e. 2MB) even if the binary is smaller. We don't exactly know what's
> after Xen, so it is not a good idea to map more than necessary for a
> couple of reasons:
>     * We would need to use break-before-make if the extra PTE needs to
>       be updated to point to another region
>     * The extra area mapped may be mapped again by Xen with different
>       memory attributes. This would result in an attribute mismatch.
> 
> Therefore, rework the logic in create_page_tables() to map only what's
> necessary. To simplify the logic, we also want to make sure _end
> is page-aligned. So align the symbol in the linker and add an assert
> to catch any change.
> 
> Lastly, take the opportunity to confirm that _start is equal to
> XEN_VIRT_START as the assembly is using both interchangeably.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
NIT: it looks like other maintainers are not CCed on this series.

Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal
Bertrand Marquis July 4, 2023, 2:09 p.m. UTC | #3
Hi Julien,

> On 29 Jun 2023, at 22:11, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, we are mapping the size of the reserved area for Xen
> (i.e. 2MB) even if the binary is smaller. We don't exactly know what's
> after Xen, so it is not a good idea to map more than necessary for a
> couple of reasons:
>    * We would need to use break-before-make if the extra PTE needs to
>      be updated to point to another region
>    * The extra area mapped may be mapped again by Xen with different
>      memory attributes. This would result in an attribute mismatch.
> 
> Therefore, rework the logic in create_page_tables() to map only what's
> necessary. To simplify the logic, we also want to make sure _end
> is page-aligned. So align the symbol in the linker and add an assert
> to catch any change.
> 
> Lastly, take the opportunity to confirm that _start is equal to
> XEN_VIRT_START as the assembly is using both interchangeably.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> 
> ---
> 
>    Changes in v2:
>        - Fix typo and coding style
>        - Check _start == XEN_VIRT_START
> ---
> xen/arch/arm/arm64/head.S | 15 ++++++++++++++-
> xen/arch/arm/xen.lds.S    |  9 +++++++++
> 2 files changed, 23 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> index c0e03755bb10..5e9562a22240 100644
> --- a/xen/arch/arm/arm64/head.S
> +++ b/xen/arch/arm/arm64/head.S
> @@ -572,6 +572,19 @@ create_page_tables:
>         create_table_entry boot_first, boot_second, x0, 1, x1, x2, x3
>         create_table_entry boot_second, boot_third, x0, 2, x1, x2, x3
> 
> +        /*
> +         * Find the size of Xen in pages and multiply by the size of a
> +         * PTE. This will then be compared in the mapping loop below.
> +         *
> +         * Note the multiplication is just to avoid using an extra
> +         * register/instruction per iteration.
> +         */
> +        ldr   x0, =_start            /* x0 := vaddr(_start) */
> +        ldr   x1, =_end              /* x1 := vaddr(_end) */
> +        sub   x0, x1, x0             /* x0 := effective size of Xen */
> +        lsr   x0, x0, #PAGE_SHIFT    /* x0 := Number of pages for Xen */
> +        lsl   x0, x0, #3             /* x0 := Number of pages * PTE size */
> +
>         /* Map Xen */
>         adr_l x4, boot_third
> 
> @@ -585,7 +598,7 @@ create_page_tables:
> 1:      str   x2, [x4, x1]           /* Map vaddr(start) */
>         add   x2, x2, #PAGE_SIZE     /* Next page */
>         add   x1, x1, #8             /* Next slot */
> -        cmp   x1, #(XEN_PT_LPAE_ENTRIES<<3) /* 512 entries per page */
> +        cmp   x1, x0                 /* Loop until we map all of Xen */
>         b.lt  1b
> 
>         /*
> diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S
> index d36b67708ab1..a3c90ca82316 100644
> --- a/xen/arch/arm/xen.lds.S
> +++ b/xen/arch/arm/xen.lds.S
> @@ -212,6 +212,7 @@ SECTIONS
>        . = ALIGN(POINTER_ALIGN);
>        __bss_end = .;
>   } :text
> +  . = ALIGN(PAGE_SIZE);
>   _end = . ;
> 
>   /* Section for the device tree blob (if any). */
> @@ -226,6 +227,12 @@ SECTIONS
>   ELF_DETAILS_SECTIONS
> }
> 
> +/*
> + * The assembly code uses _start and XEN_VIRT_START interchangeably to
> + * match the context.
> + */
> +ASSERT(_start == XEN_VIRT_START, "_start != XEN_VIRT_START")
> +
> /*
>  * We require that Xen is loaded at a page boundary, so this ensures that any
>  * code running on the identity map cannot cross a section boundary.
> @@ -241,4 +248,6 @@ ASSERT(IS_ALIGNED(__init_begin,     4), "__init_begin is misaligned")
> ASSERT(IS_ALIGNED(__init_end,       4), "__init_end is misaligned")
> ASSERT(IS_ALIGNED(__bss_start,      POINTER_ALIGN), "__bss_start is misaligned")
> ASSERT(IS_ALIGNED(__bss_end,        POINTER_ALIGN), "__bss_end is misaligned")
> +/* To simplify the logic in head.S, we want _end to be page-aligned */
> +ASSERT(IS_ALIGNED(_end,             PAGE_SIZE), "_end is not page aligned")
> ASSERT((_end - _start) <= XEN_VIRT_SIZE, "Xen is too big")
> -- 
> 2.40.1
> 
>
Julien Grall July 4, 2023, 6:25 p.m. UTC | #4
Hi Michal,

On 30/06/2023 07:50, Michal Orzel wrote:
> 
> 
> On 29/06/2023 22:11, Julien Grall wrote:
>>
>>
>> From: Julien Grall <jgrall@amazon.com>
>>
>> At the moment, we are mapping the size of the reserved area for Xen
>> (i.e. 2MB) even if the binary is smaller. We don't exactly know what's
>> after Xen, so it is not a good idea to map more than necessary for a
>> couple of reasons:
>>      * We would need to use break-before-make if the extra PTE needs to
>>        be updated to point to another region
>>      * The extra area mapped may be mapped again by Xen with different
>>        memory attributes. This would result in an attribute mismatch.
>>
>> Therefore, rework the logic in create_page_tables() to map only what's
>> necessary. To simplify the logic, we also want to make sure _end
>> is page-aligned. So align the symbol in the linker and add an assert
>> to catch any change.
>>
>> Lastly, take the opportunity to confirm that _start is equal to
>> XEN_VIRT_START as the assembly is using both interchangeably.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> NIT: it looks like other maintainers are not CCed on this series.

Whoops. I forgot to call scripts/add_maintainers.pl. I see that Bertrand 
reviewed it. So I will not resend it.

> 
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>

Thanks!

Cheers,

Patch

diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index c0e03755bb10..5e9562a22240 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -572,6 +572,19 @@  create_page_tables:
         create_table_entry boot_first, boot_second, x0, 1, x1, x2, x3
         create_table_entry boot_second, boot_third, x0, 2, x1, x2, x3
 
+        /*
+         * Find the size of Xen in pages and multiply by the size of a
+         * PTE. This will then be compared in the mapping loop below.
+         *
+         * Note the multiplication is just to avoid using an extra
+         * register/instruction per iteration.
+         */
+        ldr   x0, =_start            /* x0 := vaddr(_start) */
+        ldr   x1, =_end              /* x1 := vaddr(_end) */
+        sub   x0, x1, x0             /* x0 := effective size of Xen */
+        lsr   x0, x0, #PAGE_SHIFT    /* x0 := Number of pages for Xen */
+        lsl   x0, x0, #3             /* x0 := Number of pages * PTE size */
+
         /* Map Xen */
         adr_l x4, boot_third
 
@@ -585,7 +598,7 @@  create_page_tables:
 1:      str   x2, [x4, x1]           /* Map vaddr(start) */
         add   x2, x2, #PAGE_SIZE     /* Next page */
         add   x1, x1, #8             /* Next slot */
-        cmp   x1, #(XEN_PT_LPAE_ENTRIES<<3) /* 512 entries per page */
+        cmp   x1, x0                 /* Loop until we map all of Xen */
         b.lt  1b
 
         /*
diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S
index d36b67708ab1..a3c90ca82316 100644
--- a/xen/arch/arm/xen.lds.S
+++ b/xen/arch/arm/xen.lds.S
@@ -212,6 +212,7 @@  SECTIONS
        . = ALIGN(POINTER_ALIGN);
        __bss_end = .;
   } :text
+  . = ALIGN(PAGE_SIZE);
   _end = . ;
 
   /* Section for the device tree blob (if any). */
@@ -226,6 +227,12 @@  SECTIONS
   ELF_DETAILS_SECTIONS
 }
 
+/*
+ * The assembly code uses _start and XEN_VIRT_START interchangeably to
+ * match the context.
+ */
+ASSERT(_start == XEN_VIRT_START, "_start != XEN_VIRT_START")
+
 /*
  * We require that Xen is loaded at a page boundary, so this ensures that any
  * code running on the identity map cannot cross a section boundary.
@@ -241,4 +248,6 @@  ASSERT(IS_ALIGNED(__init_begin,     4), "__init_begin is misaligned")
 ASSERT(IS_ALIGNED(__init_end,       4), "__init_end is misaligned")
 ASSERT(IS_ALIGNED(__bss_start,      POINTER_ALIGN), "__bss_start is misaligned")
 ASSERT(IS_ALIGNED(__bss_end,        POINTER_ALIGN), "__bss_end is misaligned")
+/* To simplify the logic in head.S, we want _end to be page-aligned */
+ASSERT(IS_ALIGNED(_end,             PAGE_SIZE), "_end is not page aligned")
 ASSERT((_end - _start) <= XEN_VIRT_SIZE, "Xen is too big")
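
To make the head.S hunk above easier to follow, it is roughly equivalent to
the C sketch below. This is not actual Xen code: the PTE attribute bits are
elided, map_xen_sketch() is a made-up name, and 4KB pages are assumed. The
sketch also shows why the new ALIGN(PAGE_SIZE) padding of _end matters:
without it the right shift would round down and the tail of the image would
be left unmapped.

#include <stdint.h>

#define PAGE_SHIFT          12                  /* assuming 4KB pages */
#define PAGE_SIZE           (1UL << PAGE_SHIFT)
#define XEN_PT_LPAE_ENTRIES 512

/*
 * 'xen_size' is _end - _start. It must be a multiple of PAGE_SIZE, which is
 * exactly what the new ALIGN(PAGE_SIZE)/ASSERT in xen.lds.S guarantees;
 * otherwise the shift below would drop the partial last page.
 */
void map_xen_sketch(uint64_t boot_third[XEN_PT_LPAE_ENTRIES],
                    uint64_t paddr_start, uint64_t xen_size)
{
    /* x0 in head.S: number of pages times the size of a PTE. */
    uint64_t bound = (xen_size >> PAGE_SHIFT) * sizeof(uint64_t);
    uint64_t entry = paddr_start;               /* attribute bits elided */

    /* x1 in head.S: byte offset into boot_third, bumped by 8 per entry.
     * Before the patch the loop always ran to XEN_PT_LPAE_ENTRIES << 3,
     * i.e. it filled the whole 2MB worth of slots. */
    for ( uint64_t off = 0; off < bound; off += sizeof(uint64_t) )
    {
        boot_third[off / sizeof(uint64_t)] = entry;
        entry += PAGE_SIZE;
    }
}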