
[MM-PART3,v3,6/9] xen/arm: mm: Use {, un}map_domain_page() to map/unmap Xen page-tables

Message ID 20190614175144.20046-7-julien.grall@arm.com (mailing list archive)
State New, archived
Series xen/arm: Provide a generic function to update Xen PT

Commit Message

Julien Grall June 14, 2019, 5:51 p.m. UTC
Currently, the virtual address of the 3rd level page-tables is obtained
using mfn_to_virt().

On Arm32, mfn_to_virt() can only work on xenheap pages. While in theory
all the page-tables updated will reside in the xenheap, in practice the
page-tables covering Xen memory (e.g. xen_mapping) are part of the Xen
binary.

Furthermore, a follow-up change will update xen_pt_update_entry() to
walk all the levels and therefore be more generic. Some of the
page-tables will also be part of Xen memory and therefore will not be
reachable using mfn_to_virt().

The easiest way to reach those pages is to use {, un}map_domain_page().
While on arm32 this means an extra mapping in the normal case, this is
not very important as Xen page-tables are not updated often.

In order to allow future changes in the way Xen page-tables are mapped,
two new helpers are introduced to map/unmap the page-tables.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>

---
    Changes in v3:
        - Fix typo in the commit message
        - Add Stefano's acked-by

    Changes in v2:
        - Add Andrii's reviewed-by
---
 xen/arch/arm/mm.c | 26 ++++++++++++++++++++++----
 1 file changed, 22 insertions(+), 4 deletions(-)
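
Illustrative sketch (not part of this patch): the map/modify/unmap pattern
the new helpers enable, shown as a standalone fragment. The header names and
the hypothetical update_third_level() caller are assumptions made for the
example; map_domain_page(), unmap_domain_page(), lpae_get_mfn() and
write_pte() are the existing Xen primitives used by the diff below.

#include <xen/domain_page.h>   /* map_domain_page() / unmap_domain_page() */
#include <asm/lpae.h>          /* lpae_t, lpae_get_mfn() */
#include <asm/page.h>          /* write_pte() */

/* Thin wrappers so the mapping strategy can change later without
 * touching the callers. Today they simply defer to the domain_page API. */
static lpae_t *xen_map_table(mfn_t mfn)
{
    return map_domain_page(mfn);
}

static void xen_unmap_table(const lpae_t *table)
{
    unmap_domain_page(table);
}

/* Hypothetical caller: write 'pte' into the 3rd-level table that the
 * (valid) 2nd-level entry 'second' points to. */
static void update_third_level(lpae_t second, unsigned int offset, lpae_t pte)
{
    lpae_t *third = xen_map_table(lpae_get_mfn(second));

    write_pte(&third[offset], pte);

    xen_unmap_table(third);
}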

Comments

Stefano Stabellini June 14, 2019, 9 p.m. UTC | #1
On Fri, 14 Jun 2019, Julien Grall wrote:
> Currently, the virtual address of the 3rd level page-tables is obtained
> using mfn_to_virt().
> 
> On Arm32, mfn_to_virt() can only work on xenheap pages. While in theory
> all the page-tables updated will reside in the xenheap, in practice the
> page-tables covering Xen memory (e.g. xen_mapping) are part of the Xen
> binary.
> 
> Furthermore, a follow-up change will update xen_pt_update_entry() to
> walk all the levels and therefore be more generic. Some of the
> page-tables will also be part of Xen memory and therefore will not be
> reachable using mfn_to_virt().
> 
> The easiest way to reach those pages is to use {, un}map_domain_page().
> While on arm32 this means an extra mapping in the normal case, this is
> not very important as Xen page-tables are not updated often.
> 
> In order to allow future changes in the way Xen page-tables are mapped,
> two new helpers are introduced to map/unmap the page-tables.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>
> 
> ---
>     Changes in v3:
>         - Fix typo in the commit message
>         - Add Stefano's acked-by

It didn't stick, so:

Acked-by: Stefano Stabellini <sstabellini@kernel.org>

:-)


> 
>     Changes in v2:
>         - Add Andrii's reviewed-by
> ---
>  xen/arch/arm/mm.c | 26 ++++++++++++++++++++++----
>  1 file changed, 22 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 3f6d0e29d5..c3dd2c08ba 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -947,6 +947,16 @@ static int create_xen_table(lpae_t *entry)
>      return 0;
>  }
>  
> +static lpae_t *xen_map_table(mfn_t mfn)
> +{
> +    return map_domain_page(mfn);
> +}
> +
> +static void xen_unmap_table(const lpae_t *table)
> +{
> +    unmap_domain_page(table);
> +}
> +
>  /* Sanity check of the entry */
>  static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
>  {
> @@ -1016,6 +1026,7 @@ static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
>  static int xen_pt_update_entry(unsigned long addr, mfn_t mfn,
>                                 unsigned int flags)
>  {
> +    int rc;
>      lpae_t pte, *entry;
>      lpae_t *third = NULL;
>  
> @@ -1034,15 +1045,17 @@ static int xen_pt_update_entry(unsigned long addr, mfn_t mfn,
>  
>      BUG_ON(!lpae_is_valid(*entry));
>  
> -    third = mfn_to_virt(lpae_get_mfn(*entry));
> +    third = xen_map_table(lpae_get_mfn(*entry));
>      entry = &third[third_table_offset(addr)];
>  
> +    rc = -EINVAL;
>      if ( !xen_pt_check_entry(*entry, mfn, flags) )
> -        return -EINVAL;
> +        goto out;
>  
>      /* If we are only populating page-table, then we are done. */
> +    rc = 0;
>      if ( flags & _PAGE_POPULATE )
> -        return 0;
> +        goto out;
>  
>      /* We are removing the page */
>      if ( !(flags & _PAGE_PRESENT) )
> @@ -1067,7 +1080,12 @@ static int xen_pt_update_entry(unsigned long addr, mfn_t mfn,
>  
>      write_pte(entry, pte);
>  
> -    return 0;
> +    rc = 0;
> +
> +out:
> +    xen_unmap_table(third);
> +
> +    return rc;
>  }
>  
>  static DEFINE_SPINLOCK(xen_pt_lock);
> -- 
> 2.11.0
>
Julien Grall June 16, 2019, 8:23 p.m. UTC | #2
Hi Stefano,

On 6/14/19 10:00 PM, Stefano Stabellini wrote:
> On Fri, 14 Jun 2019, Julien Grall wrote:
>> Currently, the virtual address of the 3rd level page-tables is obtained
>> using mfn_to_virt().
>>
>> On Arm32, mfn_to_virt() can only work on xenheap pages. While in theory
>> all the page-tables updated will reside in the xenheap, in practice the
>> page-tables covering Xen memory (e.g. xen_mapping) are part of the Xen
>> binary.
>>
>> Furthermore, a follow-up change will update xen_pt_update_entry() to
>> walk all the levels and therefore be more generic. Some of the
>> page-tables will also be part of Xen memory and therefore will not be
>> reachable using mfn_to_virt().
>>
>> The easiest way to reach those pages is to use {, un}map_domain_page().
>> While on arm32 this means an extra mapping in the normal case, this is
>> not very important as Xen page-tables are not updated often.
>>
>> In order to allow future changes in the way Xen page-tables are mapped,
>> two new helpers are introduced to map/unmap the page-tables.
>>
>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>> Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>
>>
>> ---
>>      Changes in v3:
>>          - Fix typo in the commit message
>>          - Add Stefano's acked-by
> 
> It didn't stick, so:
> 
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> :-)

Whoops yes. Thankfully I didn't add an ack by mistake this time :).

Cheers,

Patch

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 3f6d0e29d5..c3dd2c08ba 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -947,6 +947,16 @@  static int create_xen_table(lpae_t *entry)
     return 0;
 }
 
+static lpae_t *xen_map_table(mfn_t mfn)
+{
+    return map_domain_page(mfn);
+}
+
+static void xen_unmap_table(const lpae_t *table)
+{
+    unmap_domain_page(table);
+}
+
 /* Sanity check of the entry */
 static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
 {
@@ -1016,6 +1026,7 @@  static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
 static int xen_pt_update_entry(unsigned long addr, mfn_t mfn,
                                unsigned int flags)
 {
+    int rc;
     lpae_t pte, *entry;
     lpae_t *third = NULL;
 
@@ -1034,15 +1045,17 @@  static int xen_pt_update_entry(unsigned long addr, mfn_t mfn,
 
     BUG_ON(!lpae_is_valid(*entry));
 
-    third = mfn_to_virt(lpae_get_mfn(*entry));
+    third = xen_map_table(lpae_get_mfn(*entry));
     entry = &third[third_table_offset(addr)];
 
+    rc = -EINVAL;
     if ( !xen_pt_check_entry(*entry, mfn, flags) )
-        return -EINVAL;
+        goto out;
 
     /* If we are only populating page-table, then we are done. */
+    rc = 0;
     if ( flags & _PAGE_POPULATE )
-        return 0;
+        goto out;
 
     /* We are removing the page */
     if ( !(flags & _PAGE_PRESENT) )
@@ -1067,7 +1080,12 @@  static int xen_pt_update_entry(unsigned long addr, mfn_t mfn,
 
     write_pte(entry, pte);
 
-    return 0;
+    rc = 0;
+
+out:
+    xen_unmap_table(third);
+
+    return rc;
 }
 
 static DEFINE_SPINLOCK(xen_pt_lock);
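
Not part of this series: a rough sketch of what the more generic per-level
walk mentioned in the commit message could look like once every level is
reached through xen_map_table()/xen_unmap_table() rather than mfn_to_virt().
The xen_pt_walk_sketch() name, the fixed offsets array and the simplified
error handling are assumptions for illustration only, not the actual
follow-up patch.

/* Walk from a mapped root table down to the 3rd level, remapping at each
 * step, and write 'pte' at the final slot. Simplified: no superpage or
 * table-allocation handling. */
static int xen_pt_walk_sketch(lpae_t *root, unsigned long addr, lpae_t pte)
{
    const unsigned int offsets[4] = {
        zeroeth_table_offset(addr),
        first_table_offset(addr),
        second_table_offset(addr),
        third_table_offset(addr),
    };
    lpae_t *table = root;
    unsigned int level;

    for ( level = 1; level < 4; level++ )
    {
        /* Copy the entry before dropping the mapping of the current table. */
        lpae_t entry = table[offsets[level - 1]];

        if ( table != root )
            xen_unmap_table(table);

        if ( !lpae_is_valid(entry) )
            return -ENOENT;

        table = xen_map_table(lpae_get_mfn(entry));
    }

    write_pte(&table[offsets[3]], pte);
    xen_unmap_table(table);

    return 0;
}

One design point worth noting from the diff above: because the 3rd-level
table is now mapped on demand, every exit path has to go through the unmap,
which is why xen_pt_update_entry() switches from early returns to a single
out: label.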