[v1] xen/riscv: Increase XEN_VIRT_SIZE

Message ID e5fa4219ccf43125e2489cc8c49b4404e6ed22ce.1743434164.git.oleksii.kurochko@gmail.com (mailing list archive)
State New
Series [v1] xen/riscv: Increase XEN_VIRT_SIZE

Commit Message

Oleksii Kurochko March 31, 2025, 3:20 p.m. UTC
A randconfig job failed with the following issue:
  riscv64-linux-gnu-ld: Xen too large for early-boot assumptions

The reason is that enabling the UBSAN config increased the size of
the Xen binary.

Increase XEN_VIRT_SIZE to reserve enough space, allowing both UBSAN
and GCOV to be enabled together, with some slack for future growth.

Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/arch/riscv/include/asm/config.h |  8 ++++----
 xen/arch/riscv/include/asm/mm.h     | 12 +++++++++---
 xen/arch/riscv/mm.c                 | 11 ++++++-----
 3 files changed, 19 insertions(+), 12 deletions(-)

Comments

Jan Beulich March 31, 2025, 4:14 p.m. UTC | #1
On 31.03.2025 17:20, Oleksii Kurochko wrote:
> A randconfig job failed with the following issue:
>   riscv64-linux-gnu-ld: Xen too large for early-boot assumptions
> 
> The reason is that enabling the UBSAN config increased the size of
> the Xen binary.
> 
> Increase XEN_VIRT_SIZE to reserve enough space, allowing both UBSAN
> and GCOV to be enabled together, with some slack for future growth.

At some point you may want to use 2M mappings for .text (rx), .rodata
(r), and .data (rw). Together with .init that would then completely
fill those 8Mb afaict. Hence you may want to go a little further right
away, e.g. to 16Mb.
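
Roughly, assuming each of those four sections then ends up rounded up to its
own 2 MB superpage (a back-of-the-envelope sketch, not measured section sizes):

  .text (2 MB) + .rodata (2 MB) + .data (2 MB) + .init (2 MB) = 8 MB

i.e. exactly the proposed XEN_VIRT_SIZE, with no slack left.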

> --- a/xen/arch/riscv/include/asm/mm.h
> +++ b/xen/arch/riscv/include/asm/mm.h
> @@ -43,13 +43,19 @@ static inline void *maddr_to_virt(paddr_t ma)
>   */
>  static inline unsigned long virt_to_maddr(unsigned long va)
>  {
> +    const unsigned int vpn1_shift = PAGETABLE_ORDER + PAGE_SHIFT;
> +    const unsigned long va_vpn = va >> vpn1_shift;
> +    const unsigned long xen_virt_starn_vpn =

s/starn/start/ ?

> +        _AC(XEN_VIRT_START, UL) >> vpn1_shift;
> +    const unsigned long xen_virt_end_vpn =
> +        xen_virt_starn_vpn + ((XEN_VIRT_SIZE >> vpn1_shift) - 1);
> +
>      if ((va >= DIRECTMAP_VIRT_START) &&
>          (va <= DIRECTMAP_VIRT_END))
>          return directmapoff_to_maddr(va - directmap_virt_start);
>  
> -    BUILD_BUG_ON(XEN_VIRT_SIZE != MB(2));
> -    ASSERT((va >> (PAGETABLE_ORDER + PAGE_SHIFT)) ==
> -           (_AC(XEN_VIRT_START, UL) >> (PAGETABLE_ORDER + PAGE_SHIFT)));
> +    BUILD_BUG_ON(XEN_VIRT_SIZE != MB(8));

Is it necessary to be != ? Won't > suffice?

> +    ASSERT((va_vpn >= xen_virt_starn_vpn) && (va_vpn <= xen_virt_end_vpn));

Are you sure about <= on the rhs of the && ?

> --- a/xen/arch/riscv/mm.c
> +++ b/xen/arch/riscv/mm.c
> @@ -31,20 +31,21 @@ unsigned long __ro_after_init phys_offset; /* = load_start - XEN_VIRT_START */
>  #define LOAD_TO_LINK(addr) ((unsigned long)(addr) - phys_offset)
>  
>  /*
> - * It is expected that Xen won't be more then 2 MB.
> + * It is expected that Xen won't be more then 8 MB.
>   * The check in xen.lds.S guarantees that.
> - * At least 3 page tables (in case of Sv39 ) are needed to cover 2 MB.
> + * At least 6 page tables (in case of Sv39) are needed to cover 8 MB.
>   * One for each page level table with PAGE_SIZE = 4 Kb.
>   *
> - * One L0 page table can cover 2 MB(512 entries of one page table * PAGE_SIZE).
> + * Four L0 page table can cover 8 MB(512 entries of
> + * one page table * PAGE_SIZE).
>   *
>   * It might be needed one more page table in case when Xen load address
>   * isn't 2 MB aligned.
>   *
> - * CONFIG_PAGING_LEVELS page tables are needed for the identity mapping,
> + * (CONFIG_PAGING_LEVELS + 2) page tables are needed for the identity mapping,
>   * except that the root page table is shared with the initial mapping
>   */
> -#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS - 1) * 2 + 1)
> +#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS + 2) * 2 + 1)

I'm having trouble fitting the comment updates with the update of the #define. Why
would more tables be needed for the identity mapping? Why does XEN_VIRT_SIZE
not appear anywhere here?

Jan
Julien Grall March 31, 2025, 4:17 p.m. UTC | #2
Hi Jan,

On 31/03/2025 17:14, Jan Beulich wrote:
> On 31.03.2025 17:20, Oleksii Kurochko wrote:
>> A randconfig job failed with the following issue:
>>    riscv64-linux-gnu-ld: Xen too large for early-boot assumptions
>>
>> The reason is that enabling the UBSAN config increased the size of
>> the Xen binary.
>>
>> Increase XEN_VIRT_SIZE to reserve enough space, allowing both UBSAN
>> and GCOV to be enabled together, with some slack for future growth.
> 
> At some point you may want to use 2M mappings for .text (rx), .rodata
> (r), and .data (rw).

OOI, why would we want to switch to 2MB? At least on Arm, Xen is tiny 
enough that it can fit in less than a couple of MB. I would expect the 
same for RISC-V.

Cheers,
Jan Beulich April 1, 2025, 6:24 a.m. UTC | #3
On 31.03.2025 18:17, Julien Grall wrote:
> On 31/03/2025 17:14, Jan Beulich wrote:
>> On 31.03.2025 17:20, Oleksii Kurochko wrote:
>>> A randconfig job failed with the following issue:
>>>    riscv64-linux-gnu-ld: Xen too large for early-boot assumptions
>>>
>>> The reason is that enabling the UBSAN config increased the size of
>>> the Xen binary.
>>>
>>> Increase XEN_VIRT_SIZE to reserve enough space, allowing both UBSAN
>>> and GCOV to be enabled together, with some slack for future growth.
>>
>> At some point you may want to use 2M mappings for .text (rx), .rodata
>> (r), and .data (rw).
> 
> OOI, why would we want to switch to 2MB? At least on Arm, Xen is tiny 
> enough that it can fit in less than a couple of MB. I would expect the 
> same for RISC-V.

For TLB efficiency reasons for example. On x86 we switched to using 2Mb
pages quite some time back, just to find that (at least) one of the
bootloaders choked on the then larger binary. Hence we ended up with
the XEN_ALIGN_2M Kconfig symbol plus the unconditional use of 2Mb
mappings for xen.efi. For the original change see cf393624eec3 ("x86:
use 2M superpages for text/data/bss mappings").

Jan
Julien Grall April 1, 2025, 11:59 a.m. UTC | #4
On 01/04/2025 07:24, Jan Beulich wrote:
> On 31.03.2025 18:17, Julien Grall wrote:
>> On 31/03/2025 17:14, Jan Beulich wrote:
>>> On 31.03.2025 17:20, Oleksii Kurochko wrote:
>>>> A randconfig job failed with the following issue:
>>>>     riscv64-linux-gnu-ld: Xen too large for early-boot assumptions
>>>>
>>>> The reason is that enabling the UBSAN config increased the size of
>>>> the Xen binary.
>>>>
>>>> Increase XEN_VIRT_SIZE to reserve enough space, allowing both UBSAN
>>>> and GCOV to be enabled together, with some slack for future growth.
>>>
>>> At some point you may want to use 2M mappings for .text (rx), .rodata
>>> (r), and .data (rw).
>>
>> OOI, why would we want to switch to 2MB? At least on Arm, Xen is tiny
>> enough that it can fit in less than a couple of MB. I would expect the
>> same for RISC-V.
> 
> For TLB efficiency reasons for example. On x86 we switched to using 2Mb
> pages quite some time back, just to find that (at least) one of the
> bootloaders choked on the then larger binary. Hence we ended up with
> the XEN_ALIGN_2M Kconfig symbol plus the unconditional use of 2Mb
> mappings for xen.efi. For the original change see cf393624eec3 ("x86:
> use 2M superpages for text/data/bss mappings").

For Arm, we can use the contiguous bit (it allows combining a few
entries into one TLB entry on some CPUs) to reduce TLB usage. Not sure
if RISC-V has a similar feature.

Cheers,
Oleksii Kurochko April 1, 2025, 3:46 p.m. UTC | #5
On 4/1/25 1:59 PM, Julien Grall wrote:
>
>
> On 01/04/2025 07:24, Jan Beulich wrote:
>> On 31.03.2025 18:17, Julien Grall wrote:
>>> On 31/03/2025 17:14, Jan Beulich wrote:
>>>> On 31.03.2025 17:20, Oleksii Kurochko wrote:
>>>>> A randconfig job failed with the following issue:
>>>>>     riscv64-linux-gnu-ld: Xen too large for early-boot assumptions
>>>>>
>>>>> The reason is that enabling the UBSAN config increased the size of
>>>>> the Xen binary.
>>>>>
>>>>> Increase XEN_VIRT_SIZE to reserve enough space, allowing both UBSAN
>>>>> and GCOV to be enabled together, with some slack for future growth.
>>>>
>>>> At some point you may want to use 2M mappings for .text (rx), .rodata
>>>> (r), and .data (rw).
>>>
>>> OOI, why would we want to switch to 2MB? At least on Arm, Xen is tiny
>>> enough that it can fit in less than a couple of MB. I would expect the
>>> same for RISC-V.
>>
>> For TLB efficiency reasons for example. On x86 we switched to using 2Mb
>> pages quite some time back, just to find that (at least) one of the
>> bootloaders choked on the then larger binary. Hence we ended up with
>> the XEN_ALIGN_2M Kconfig symbol plus the unconditional use of 2Mb
>> mappings for xen.efi. For the original change see cf393624eec3 ("x86:
>> use 2M superpages for text/data/bss mappings").
>
> For Arm, we can use the contiguous bit (it allows combining a few
> entries into one TLB entry on some CPUs) to reduce TLB usage. Not sure
> if RISC-V has a similar feature.

Unfortunately, RISC-V doesn't have such an option.

~ Oleksii
Oleksii Kurochko April 1, 2025, 3:58 p.m. UTC | #6
On 3/31/25 6:14 PM, Jan Beulich wrote:
> On 31.03.2025 17:20, Oleksii Kurochko wrote:
>> A randconfig job failed with the following issue:
>>    riscv64-linux-gnu-ld: Xen too large for early-boot assumptions
>>
>> The reason is that enabling the UBSAN config increased the size of
>> the Xen binary.
>>
>> Increase XEN_VIRT_SIZE to reserve enough space, allowing both UBSAN
>> and GCOV to be enabled together, with some slack for future growth.
> At some point you may want to use 2M mappings for .text (rx), .rodata
> (r), and .data (rw). Together with .init that would then completely
> fill those 8Mb afaict. Hence you may want to go a little further right
> away, e.g. to 16Mb.

It makes sense to me. I'll update it to 16 MB right away then.

>> +        _AC(XEN_VIRT_START, UL) >> vpn1_shift;
>> +    const unsigned long xen_virt_end_vpn =
>> +        xen_virt_starn_vpn + ((XEN_VIRT_SIZE >> vpn1_shift) - 1);
>> +
>>       if ((va >= DIRECTMAP_VIRT_START) &&
>>           (va <= DIRECTMAP_VIRT_END))
>>           return directmapoff_to_maddr(va - directmap_virt_start);
>>   
>> -    BUILD_BUG_ON(XEN_VIRT_SIZE != MB(2));
>> -    ASSERT((va >> (PAGETABLE_ORDER + PAGE_SHIFT)) ==
>> -           (_AC(XEN_VIRT_START, UL) >> (PAGETABLE_ORDER + PAGE_SHIFT)));
>> +    BUILD_BUG_ON(XEN_VIRT_SIZE != MB(8));
> Is it necessary to be != ? Won't > suffice?

It could be just > MB(2). Or perhaps >=.

>
>> +    ASSERT((va_vpn >= xen_virt_starn_vpn) && (va_vpn <= xen_virt_end_vpn));
> Are you sure about <= on the rhs of the && ?

I am using -1 [ ((XEN_VIRT_SIZE >> vpn1_shift) - 1) ] when calculating the xen_virt_end_vpn to make the range inclusive.
So it should be fine.
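
For example, in Sv39 (a sketch of the arithmetic with this patch's values):

  vpn1_shift                  = PAGETABLE_ORDER + PAGE_SHIFT = 9 + 12 = 21
  XEN_VIRT_SIZE >> vpn1_shift = MB(8) >> 21 = 4          /* 2 MB chunks */
  xen_virt_end_vpn            = xen_virt_starn_vpn + (4 - 1)

so xen_virt_end_vpn is the VPN[1] of the last 2 MB chunk still belonging to
Xen, and <= is the matching inclusive upper bound.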

>
>> --- a/xen/arch/riscv/mm.c
>> +++ b/xen/arch/riscv/mm.c
>> @@ -31,20 +31,21 @@ unsigned long __ro_after_init phys_offset; /* = load_start - XEN_VIRT_START */
>>   #define LOAD_TO_LINK(addr) ((unsigned long)(addr) - phys_offset)
>>   
>>   /*
>> - * It is expected that Xen won't be more then 2 MB.
>> + * It is expected that Xen won't be more then 8 MB.
>>    * The check in xen.lds.S guarantees that.
>> - * At least 3 page tables (in case of Sv39 ) are needed to cover 2 MB.
>> + * At least 6 page tables (in case of Sv39) are needed to cover 8 MB.
>>    * One for each page level table with PAGE_SIZE = 4 Kb.
>>    *
>> - * One L0 page table can cover 2 MB(512 entries of one page table * PAGE_SIZE).
>> + * Four L0 page table can cover 8 MB(512 entries of
>> + * one page table * PAGE_SIZE).
>>    *
>>    * It might be needed one more page table in case when Xen load address
>>    * isn't 2 MB aligned.
>>    *
>> - * CONFIG_PAGING_LEVELS page tables are needed for the identity mapping,
>> + * (CONFIG_PAGING_LEVELS + 2) page tables are needed for the identity mapping,
>>    * except that the root page table is shared with the initial mapping
>>    */
>> -#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS - 1) * 2 + 1)
>> +#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS + 2) * 2 + 1)
> I'm having trouble fitting the comment updates with the update of the #define. Why
> would more tables be needed for the identity mapping?

Agreed, no additional page tables are needed for the identity mapping.

>   Why does XEN_VIRT_SIZE
> not appear anywhere here?

I just used 8 MB explicitly in the comment, but I think you are really asking about the
definition of PGTBL_INITIAL_COUNT, where I just explicitly account for 3 extra pages for L0.
I will update it to use XEN_VIRT_SIZE so that the definition of PGTBL_INITIAL_COUNT is more generic.
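
Something along these lines, just as a sketch of the direction (the helper name
XEN_NR_L0_TABLES and the exact accounting are illustrative, not the final
definition):

  /* L0 tables needed to map XEN_VIRT_SIZE with 4K pages: one per 2 MB. */
  #define XEN_NR_L0_TABLES (XEN_VIRT_SIZE >> (PAGETABLE_ORDER + PAGE_SHIFT))

  #define PGTBL_INITIAL_COUNT                                               \
      ((CONFIG_PAGING_LEVELS - 2)     /* non-root, non-L0 levels for Xen */ \
       + XEN_NR_L0_TABLES             /* L0 tables covering XEN_VIRT_SIZE */\
       + 1                            /* extra L0 if load addr unaligned */ \
       + (CONFIG_PAGING_LEVELS - 1))  /* identity map, root table shared */

For Sv39 that gives 1 + 4 + 1 + 2 = 8 pages for 8 MB (and the same formula
yields the original 5 pages for the old 2 MB size).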

Thanks

~ Oleksii
Jan Beulich April 1, 2025, 4:04 p.m. UTC | #7
On 01.04.2025 17:58, Oleksii Kurochko wrote:
> On 3/31/25 6:14 PM, Jan Beulich wrote:
>> On 31.03.2025 17:20, Oleksii Kurochko wrote:
>>> +        _AC(XEN_VIRT_START, UL) >> vpn1_shift;
>>> +    const unsigned long xen_virt_end_vpn =
>>> +        xen_virt_starn_vpn + ((XEN_VIRT_SIZE >> vpn1_shift) - 1);
>>> +
>>>       if ((va >= DIRECTMAP_VIRT_START) &&
>>>           (va <= DIRECTMAP_VIRT_END))
>>>           return directmapoff_to_maddr(va - directmap_virt_start);
>>>   
>>> -    BUILD_BUG_ON(XEN_VIRT_SIZE != MB(2));
>>> -    ASSERT((va >> (PAGETABLE_ORDER + PAGE_SHIFT)) ==
>>> -           (_AC(XEN_VIRT_START, UL) >> (PAGETABLE_ORDER + PAGE_SHIFT)));
>>> +    BUILD_BUG_ON(XEN_VIRT_SIZE != MB(8));
>> Is it necessary to be != ? Won't > suffice?
> 
> It could be just > MB(2). Or perphaps >=.

>= would make the build fail, wouldn't it?
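
(BUILD_BUG_ON() fails the build when its condition is true, so
BUILD_BUG_ON(XEN_VIRT_SIZE >= MB(2)) would trigger unconditionally with the
current 8 MB value.)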

>>> +    ASSERT((va_vpn >= xen_virt_starn_vpn) && (va_vpn <= xen_virt_end_vpn));
>> Are you sure about <= on the rhs of the && ?
> 
> I am using -1 [ ((XEN_VIRT_SIZE >> vpn1_shift) - 1) ] when calculating the xen_virt_end_vpn to make the range inclusive.

Oh, indeed, I didn't look there closely enough.

Jan

Patch

diff --git a/xen/arch/riscv/include/asm/config.h b/xen/arch/riscv/include/asm/config.h
index 7141bd9e46..ec73f436e3 100644
--- a/xen/arch/riscv/include/asm/config.h
+++ b/xen/arch/riscv/include/asm/config.h
@@ -41,11 +41,11 @@ 
  * Start addr          | End addr         | Slot       | area description
  * ============================================================================
  *                   .....                 L2 511          Unused
- *  0xffffffffc0a00000  0xffffffffc0bfffff L2 511          Fixmap
+ *  0xffffffffc1000000  0xffffffffc11fffff L2 511          Fixmap
  *                   ..... ( 2 MB gap )
- *  0xffffffffc0400000  0xffffffffc07fffff L2 511          FDT
+ *  0xffffffffc0a00000  0xffffffffc0dfffff L2 511          FDT
  *                   ..... ( 2 MB gap )
- *  0xffffffffc0000000  0xffffffffc01fffff L2 511          Xen
+ *  0xffffffffc0000000  0xffffffffc07fffff L2 511          Xen
  *                   .....                 L2 510          Unused
  *  0x3200000000        0x7f7fffffff       L2 200-509      Direct map
  *                   .....                 L2 199          Unused
@@ -78,7 +78,7 @@ 
 
 #define GAP_SIZE                MB(2)
 
-#define XEN_VIRT_SIZE           MB(2)
+#define XEN_VIRT_SIZE           MB(8)
 
 #define BOOT_FDT_VIRT_START     (XEN_VIRT_START + XEN_VIRT_SIZE + GAP_SIZE)
 #define BOOT_FDT_VIRT_SIZE      MB(4)
diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
index 4035cd400a..822c21e02a 100644
--- a/xen/arch/riscv/include/asm/mm.h
+++ b/xen/arch/riscv/include/asm/mm.h
@@ -43,13 +43,19 @@  static inline void *maddr_to_virt(paddr_t ma)
  */
 static inline unsigned long virt_to_maddr(unsigned long va)
 {
+    const unsigned int vpn1_shift = PAGETABLE_ORDER + PAGE_SHIFT;
+    const unsigned long va_vpn = va >> vpn1_shift;
+    const unsigned long xen_virt_starn_vpn =
+        _AC(XEN_VIRT_START, UL) >> vpn1_shift;
+    const unsigned long xen_virt_end_vpn =
+        xen_virt_starn_vpn + ((XEN_VIRT_SIZE >> vpn1_shift) - 1);
+
     if ((va >= DIRECTMAP_VIRT_START) &&
         (va <= DIRECTMAP_VIRT_END))
         return directmapoff_to_maddr(va - directmap_virt_start);
 
-    BUILD_BUG_ON(XEN_VIRT_SIZE != MB(2));
-    ASSERT((va >> (PAGETABLE_ORDER + PAGE_SHIFT)) ==
-           (_AC(XEN_VIRT_START, UL) >> (PAGETABLE_ORDER + PAGE_SHIFT)));
+    BUILD_BUG_ON(XEN_VIRT_SIZE != MB(8));
+    ASSERT((va_vpn >= xen_virt_starn_vpn) && (va_vpn <= xen_virt_end_vpn));
 
     /* phys_offset = load_start - XEN_VIRT_START */
     return phys_offset + va;
diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
index f2bf279bac..dfa86738f2 100644
--- a/xen/arch/riscv/mm.c
+++ b/xen/arch/riscv/mm.c
@@ -31,20 +31,21 @@  unsigned long __ro_after_init phys_offset; /* = load_start - XEN_VIRT_START */
 #define LOAD_TO_LINK(addr) ((unsigned long)(addr) - phys_offset)
 
 /*
- * It is expected that Xen won't be more then 2 MB.
+ * It is expected that Xen won't be more then 8 MB.
  * The check in xen.lds.S guarantees that.
- * At least 3 page tables (in case of Sv39 ) are needed to cover 2 MB.
+ * At least 6 page tables (in case of Sv39) are needed to cover 8 MB.
  * One for each page level table with PAGE_SIZE = 4 Kb.
  *
- * One L0 page table can cover 2 MB(512 entries of one page table * PAGE_SIZE).
+ * Four L0 page table can cover 8 MB(512 entries of
+ * one page table * PAGE_SIZE).
  *
  * It might be needed one more page table in case when Xen load address
  * isn't 2 MB aligned.
  *
- * CONFIG_PAGING_LEVELS page tables are needed for the identity mapping,
+ * (CONFIG_PAGING_LEVELS + 2) page tables are needed for the identity mapping,
  * except that the root page table is shared with the initial mapping
  */
-#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS - 1) * 2 + 1)
+#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS + 2) * 2 + 1)
 
 pte_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
 stage1_pgtbl_root[PAGETABLE_ENTRIES];