
[for-4.13,v3] xen/arm: mm: Allow generic xen page-tables helpers to be called early

Message ID: 20191015191610.28680-1-julien.grall@arm.com

Commit Message

Julien Grall Oct. 15, 2019, 7:16 p.m. UTC
The current implementations of xen_{map, unmap}_table() expect
{map, unmap}_domain_page() to be usable. Those helpers are used to
map/unmap page tables while updating Xen page-tables.

Since commit 022387ee1a "xen/arm: mm: Don't open-code Xen PT update in
{set, clear}_fixmap()", setup_fixmap() will make use of the helpers
mentioned above. When booting Xen using GRUB, setup_fixmap() may be used
before map_domain_page() can be called. This will result in a data abort:

(XEN) Data Abort Trap. Syndrome=0x5
(XEN) CPU0: Unexpected Trap: Data Abort

[...]

(XEN) Xen call trace:
(XEN)    [<000000000025ab6c>] mm.c#xen_pt_update+0x2b4/0x59c (PC)
(XEN)    [<000000000025ab20>] mm.c#xen_pt_update+0x268/0x59c (LR)
(XEN)    [<000000000025ae70>] set_fixmap+0x1c/0x2c
(XEN)    [<00000000002a9c98>] copy_from_paddr+0x7c/0xdc
(XEN)    [<00000000002a4ae0>] has_xsm_magic+0x18/0x34
(XEN)    [<00000000002a5b5c>] bootfdt.c#early_scan_node+0x398/0x560
(XEN)    [<00000000002a5de0>] device_tree_for_each_node+0xbc/0x144
(XEN)    [<00000000002a5ed4>] boot_fdt_info+0x6c/0x260
(XEN)    [<00000000002ac0d0>] start_xen+0x108/0xc74
(XEN)    [<000000000020044c>] arm64/head.o#paging+0x60/0x88

During early boot, the page tables are either statically allocated in
the Xen binary or allocated via alloc_boot_pages().

Statically allocated page-tables will already be mapped as part of the
Xen binary, so we can easily find their virtual address.

For dynamically allocated page-tables, we need to rely on
map_domain_page() being functional.

For arm32, the call will be usable well before pages can be dynamically
allocated (see setup_pagetables()). For arm64, the call will be usable
after setup_xenheap_mappings().

In both cases, memory is given to the boot allocator afterwards. So we
can rely on map_domain_page() for mapping page tables allocated
dynamically.

The helpers xen_{map, unmap}_table() are now updated to take into
account the case where page-tables are part of the Xen binary.
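
In short, the mapping side of the change looks as follows (condensed from
the full patch below; the numbers in the comment are purely illustrative):

    static lpae_t *xen_map_table(mfn_t mfn)
    {
        if ( system_state == SYS_STATE_early_boot && is_xen_fixed_mfn(mfn) )
        {
            /*
             * The table is part of the Xen image, which is always mapped.
             * E.g. (hypothetical numbers): Xen loaded at physical
             * 0x80200000 and the table at 0x80205000 gives
             * XEN_VIRT_START + 0x5000.
             */
            vaddr_t offset = mfn_to_maddr(mfn) - virt_to_maddr(&_start);

            return (lpae_t *)(XEN_VIRT_START + offset);
        }

        /* Otherwise map_domain_page() is usable by now (see above). */
        return map_domain_page(mfn);
    }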

Fixes: 022387ee1a ('xen/arm: mm: Don't open-code Xen PT update in {set, clear}_fixmap()')
Signed-off-by: Julien Grall <julien.grall@arm.com>
Release-acked-by: Juergen Gross <jgross@suse.com>

---
    Changes in v3:
        - Use is_xen_fixed_mfn
        - Add a comment about demoting the type.

    Changes in v2:
        - Add RaB Juergen
        - Rework the check to avoid truncation

    Note this patch relies on is_xen_fixed_mfn to be correct (see [1]).

    [1] https://lists.xen.org/archives/html/xen-devel/2019-10/msg01138.html
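
    For reference, is_xen_fixed_mfn() boils down to a range check on the
    physical address of the frame against the physical footprint of the
    Xen image, roughly along these lines (a simplified sketch, not the
    exact definition; see [1]):

        /* Sketch: does the MFN fall within the Xen image? */
        #define is_xen_fixed_mfn(mfn)                            \
            ((mfn_to_maddr(mfn) >= virt_to_maddr(&_start)) &&    \
             (mfn_to_maddr(mfn) <  virt_to_maddr(&_end)))

    Because the comparison is done entirely on (64-bit) physical
    addresses, it avoids the Arm32 truncation issue mentioned in the
    comment added by the patch.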
---
 xen/arch/arm/mm.c | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

Comments

Stefano Stabellini Oct. 15, 2019, 7:28 p.m. UTC | #1
On Tue, 15 Oct 2019, Julien Grall wrote:
> The current implementations of xen_{map, unmap}_table() expect
> {map, unmap}_domain_page() to be usable. Those helpers are used to
> map/unmap page tables while updating Xen page-tables.
> 
> [...]
> 
> Fixes: 022387ee1a ('xen/arm: mm: Don't open-code Xen PT update in {set, clear}_fixmap()')
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Release-acked-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>



Patch

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index be23acfe26..4514bc6bb0 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -964,11 +964,41 @@  static int create_xen_table(lpae_t *entry)
 
 static lpae_t *xen_map_table(mfn_t mfn)
 {
+    /*
+     * We may need to map the page table before map_domain_page() is
+     * usable. The requirement here is that it must be usable as soon as
+     * page-tables are allocated dynamically via alloc_boot_pages().
+     *
+     * We need to do the check on the physical address rather than the
+     * virtual address to avoid truncation on Arm32. Therefore is_kernel()
+     * cannot be used.
+     */
+    if ( system_state == SYS_STATE_early_boot )
+    {
+        if ( is_xen_fixed_mfn(mfn) )
+        {
+            /*
+             * It is fine to demote the type because the size of Xen
+             * will always fit in vaddr_t.
+             */
+            vaddr_t offset = mfn_to_maddr(mfn) - virt_to_maddr(&_start);
+
+            return (lpae_t *)(XEN_VIRT_START + offset);
+        }
+    }
+
     return map_domain_page(mfn);
 }
 
 static void xen_unmap_table(const lpae_t *table)
 {
+    /*
+     * During early boot, xen_map_table() will not use map_domain_page()
+     * for page-tables residing in the Xen binary. So skip the unmap part.
+     */
+    if ( system_state == SYS_STATE_early_boot && is_kernel(table) )
+        return;
+
     unmap_domain_page(table);
 }