
[MM-PART3,v2,09/12] xen/arm: mm: Use {, un}map_domain_page() to map/unmap Xen page-tables

Message ID 20190514123125.29086-10-julien.grall@arm.com (mailing list archive)
State Superseded
Series xen/arm: Provide a generic function to update Xen PT

Commit Message

Julien Grall May 14, 2019, 12:31 p.m. UTC
Currently, the virtual address of the 3rd level page-tables is obtained
using mfn_to_virt().

On Arm32, mfn_to_virt() can only work on xenheap pages. While in theory
all the page-tables updated will reside in xenheap, in practice the
page-tables covering Xen memory (e.g. xen_mapping) are part of the Xen binary.

Furthermore, a follow-up change will update xen_pt_update_entry() to
walk all the levels and therefore be more generic. Some of the
page-tables will also be part of Xen memory and therefore will not be
reachable using mfn_to_virt().

The easiest way to reach those pages is to use {, un}map_domain_page().
While on arm32 this means an extra mapping in the common case, this is not
very important as Xen page-tables are not updated often.

In order to allow future changes in the way Xen page-tables are mapped,
two new helpers are introduced to map/unmap the page-tables.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>

---

    Changes in v2:
        - Add Andrii's reviewed-by
---
 xen/arch/arm/mm.c | 26 ++++++++++++++++++++++----
 1 file changed, 22 insertions(+), 4 deletions(-)

Comments

Stefano Stabellini June 12, 2019, 10:25 p.m. UTC | #1
On Tue, 14 May 2019, Julien Grall wrote:
> Currently, the virtual address of the 3rd level page-tables is obtained
> using mfn_to_virt().
> 
> On Arm32, mfn_to_virt can only work on xenheap page. While in practice
> all the page-tables updated will reside in xenheap, in practive the
                                                        ^ in theory ?


> page-tables covering Xen memory (e.g xen_mapping) is part of Xen binary.
> 
> Furthermore, a follow-up change will update xen_pt_update_entry() to
> walk all the levels and therefore be more generic. Some of the
> page-tables will also part of Xen memory and therefore will not be
> reachable using mfn_to_virt().
> 
> The easiest way to reach those pages is to use {, un}map_domain_page().
> While on arm32 this means an extra mapping in the normal cases, this is not
> very important as xen page-tables are not updated often.
> 
> In order to allow future change in the way Xen page-tables are mapped,
> two new helpers are introduced to map/unmap the page-tables.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>

aside from the typo above:

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> 
>     Changes in v2:
>         - Add Andrii's reviewed-by
> ---
>  xen/arch/arm/mm.c | 26 ++++++++++++++++++++++----
>  1 file changed, 22 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 651e296041..f5979f549b 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -974,6 +974,16 @@ static int create_xen_table(lpae_t *entry)
>      return 0;
>  }
>  
> +static lpae_t *xen_map_table(mfn_t mfn)
> +{
> +    return map_domain_page(mfn);
> +}
> +
> +static void xen_unmap_table(const lpae_t *table)
> +{
> +    unmap_domain_page(table);
> +}
> +
>  /* Sanity check of the entry */
>  static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
>  {
> @@ -1036,6 +1046,7 @@ static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
>  static int xen_pt_update_entry(unsigned long addr, mfn_t mfn,
>                                 unsigned int flags)
>  {
> +    int rc;
>      lpae_t pte, *entry;
>      lpae_t *third = NULL;
>  
> @@ -1054,15 +1065,17 @@ static int xen_pt_update_entry(unsigned long addr, mfn_t mfn,
>  
>      BUG_ON(!lpae_is_valid(*entry));
>  
> -    third = mfn_to_virt(lpae_get_mfn(*entry));
> +    third = xen_map_table(lpae_get_mfn(*entry));
>      entry = &third[third_table_offset(addr)];
>  
> +    rc = -EINVAL;
>      if ( !xen_pt_check_entry(*entry, mfn, flags) )
> -        return -EINVAL;
> +        goto out;
>  
>      /* If we are only populating page-table, then we are done. */
> +    rc = 0;
>      if ( flags & _PAGE_POPULATE )
> -        return 0;
> +        goto out;
>  
>      /* We are removing the page */
>      if ( !(flags & _PAGE_PRESENT) )
> @@ -1087,7 +1100,12 @@ static int xen_pt_update_entry(unsigned long addr, mfn_t mfn,
>  
>      write_pte(entry, pte);
>  
> -    return 0;
> +    rc = 0;
> +
> +out:
> +    xen_unmap_table(third);
> +
> +    return rc;
>  }
>  
>  static DEFINE_SPINLOCK(xen_pt_lock);
> -- 
> 2.11.0
>
Julien Grall June 13, 2019, 8:07 a.m. UTC | #2
Hi Stefano,

On 6/12/19 11:25 PM, Stefano Stabellini wrote:
> On Tue, 14 May 2019, Julien Grall wrote:
>> Currently, the virtual address of the 3rd level page-tables is obtained
>> using mfn_to_virt().
>>
>> On Arm32, mfn_to_virt can only work on xenheap page. While in practice
>> all the page-tables updated will reside in xenheap, in practive the
>                                                          ^ in theory ?

The first one should be "theory" and the second "practice", because 
some of the bootstrap page-tables (e.g. xen_fixmap/xen_mapping) are part 
of the Xen binary.

> 
> 
>> page-tables covering Xen memory (e.g xen_mapping) is part of Xen binary.
>>
>> Furthermore, a follow-up change will update xen_pt_update_entry() to
>> walk all the levels and therefore be more generic. Some of the
>> page-tables will also part of Xen memory and therefore will not be
>> reachable using mfn_to_virt().
>>
>> The easiest way to reach those pages is to use {, un}map_domain_page().
>> While on arm32 this means an extra mapping in the normal cases, this is not
>> very important as xen page-tables are not updated often.
>>
>> In order to allow future change in the way Xen page-tables are mapped,
>> two new helpers are introduced to map/unmap the page-tables.
>>
>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>> Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>
> 
> aside from the typo above:

Let me know if my suggestion makes sense above.

> 
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>

Thank you.

Cheers,
Stefano Stabellini June 13, 2019, 5:55 p.m. UTC | #3
On Thu, 13 Jun 2019, Julien Grall wrote:
> Hi Stefano,
> 
> On 6/12/19 11:25 PM, Stefano Stabellini wrote:
> > On Tue, 14 May 2019, Julien Grall wrote:
> > > Currently, the virtual address of the 3rd level page-tables is obtained
> > > using mfn_to_virt().
> > > 
> > > On Arm32, mfn_to_virt can only work on xenheap page. While in practice
> > > all the page-tables updated will reside in xenheap, in practive the
> >                                                          ^ in theory ?
> 
> The first one should to be "theory" and the second "practice". Because some of
> the bootstrap page-tables (e.g xen_fixmap/xen_mapping) are part of Xen binary.
> 
> > 
> > 
> > > page-tables covering Xen memory (e.g xen_mapping) is part of Xen binary.
> > > 
> > > Furthermore, a follow-up change will update xen_pt_update_entry() to
> > > walk all the levels and therefore be more generic. Some of the
> > > page-tables will also part of Xen memory and therefore will not be
> > > reachable using mfn_to_virt().
> > > 
> > > The easiest way to reach those pages is to use {, un}map_domain_page().
> > > While on arm32 this means an extra mapping in the normal cases, this is
> > > not
> > > very important as xen page-tables are not updated often.
> > > 
> > > In order to allow future change in the way Xen page-tables are mapped,
> > > two new helpers are introduced to map/unmap the page-tables.
> > > 
> > > Signed-off-by: Julien Grall <julien.grall@arm.com>
> > > Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>
> > 
> > aside from the typo above:
> 
> Let me know if my suggestion makes sense above.

Yes, fine


> > Acked-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> Thank you.
> 
> Cheers,
> 
> -- 
> Julien Grall
>

Patch

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 651e296041..f5979f549b 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -974,6 +974,16 @@  static int create_xen_table(lpae_t *entry)
     return 0;
 }
 
+static lpae_t *xen_map_table(mfn_t mfn)
+{
+    return map_domain_page(mfn);
+}
+
+static void xen_unmap_table(const lpae_t *table)
+{
+    unmap_domain_page(table);
+}
+
 /* Sanity check of the entry */
 static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
 {
@@ -1036,6 +1046,7 @@  static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
 static int xen_pt_update_entry(unsigned long addr, mfn_t mfn,
                                unsigned int flags)
 {
+    int rc;
     lpae_t pte, *entry;
     lpae_t *third = NULL;
 
@@ -1054,15 +1065,17 @@  static int xen_pt_update_entry(unsigned long addr, mfn_t mfn,
 
     BUG_ON(!lpae_is_valid(*entry));
 
-    third = mfn_to_virt(lpae_get_mfn(*entry));
+    third = xen_map_table(lpae_get_mfn(*entry));
     entry = &third[third_table_offset(addr)];
 
+    rc = -EINVAL;
     if ( !xen_pt_check_entry(*entry, mfn, flags) )
-        return -EINVAL;
+        goto out;
 
     /* If we are only populating page-table, then we are done. */
+    rc = 0;
     if ( flags & _PAGE_POPULATE )
-        return 0;
+        goto out;
 
     /* We are removing the page */
     if ( !(flags & _PAGE_PRESENT) )
@@ -1087,7 +1100,12 @@  static int xen_pt_update_entry(unsigned long addr, mfn_t mfn,
 
     write_pte(entry, pte);
 
-    return 0;
+    rc = 0;
+
+out:
+    xen_unmap_table(third);
+
+    return rc;
 }
 
 static DEFINE_SPINLOCK(xen_pt_lock);