
[v6] x86: introduce a new set of APIs to manage Xen page tables

Message ID ad98947f577560d47ea7825deb624149788645d0.1580219401.git.hongyxia@amazon.com (mailing list archive)
State New, archived
Series: [v6] x86: introduce a new set of APIs to manage Xen page tables

Commit Message

Xia, Hongyan Jan. 28, 2020, 1:50 p.m. UTC
From: Wei Liu <wei.liu2@citrix.com>

We are going to switch to using domheap pages for page tables.
A new set of APIs is introduced to allocate and free page-table pages
based on mfn instead of the xenheap direct-map address. The
allocation and deallocation work on mfn_t rather than page_info,
because they are required to work even before the frame table is set
up.

Implement the old functions in terms of the new ones. We will rewrite,
site by site, the other mm functions that manipulate page tables to use
the new APIs.

After allocation, one needs to map and unmap via map_domain_page to
access the PTEs. This does not break Xen halfway through the
transition, since the new APIs still use xenheap pages underneath, and
map_domain_page will simply use the direct map for mappings. They will
be switched to domheap pages and dynamic mappings once usage of the old
APIs is eliminated.

No functional change intended in this patch.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>

---
Changed since v5:
- sounds like we are happy to use map_domain_page for Xen PTEs. Remove
  map/unmap_xen_pagetable, just use map/unmap_domain_page instead.
- remove redundant logic in free_xen_pagetable.

Changed since v4:
- properly handle INVALID_MFN.
- remove the _new suffix for map/unmap_xen_pagetable because they do not
  have old alternatives.

Changed since v3:
- const qualify unmap_xen_pagetable_new().
- remove redundant parentheses.
---
 xen/arch/x86/mm.c        | 32 +++++++++++++++++++++++++++-----
 xen/include/asm-x86/mm.h |  3 +++
 2 files changed, 30 insertions(+), 5 deletions(-)

Comments

Wei Liu Jan. 28, 2020, 2:21 p.m. UTC | #1
On Tue, Jan 28, 2020 at 01:50:05PM +0000, Hongyan Xia wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> [...]
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
> Reviewed-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Wei Liu <wl@xen.org>
Xia, Hongyan Feb. 5, 2020, 4:25 p.m. UTC | #2
Ping.

On Tue, 2020-01-28 at 13:50 +0000, Hongyan Xia wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> [...]
Jan Beulich Feb. 5, 2020, 4:47 p.m. UTC | #3
On 05.02.2020 17:25, Xia, Hongyan wrote:
> Ping.

Sorry, there's just too much else also needing attention. I'm
doing what I can review-wise, and I assume some others do so,
too. You're very welcome to help with the review load.

Jan

Julien Grall Feb. 5, 2020, 5:45 p.m. UTC | #4
Hi Jan,

On 05/02/2020 16:47, Jan Beulich wrote:
> On 05.02.2020 17:25, Xia, Hongyan wrote:
>> Ping.
> 
> Sorry, there's just too much else also needing attention. I'm
> doing what I can review-wise, and I assume some others do so,
> too. You're very welcome to help with the review load.

Wei and I already reviewed the patch. So Hongyan is mainly waiting on 
the maintainers (Andrew and you) to give their final ack.

Would you be happy to give your ack based on the reviews from Wei and me?

Cheers,
Jan Beulich Feb. 6, 2020, 8:30 a.m. UTC | #5
On 05.02.2020 18:45, Julien Grall wrote:
> On 05/02/2020 16:47, Jan Beulich wrote:
>> On 05.02.2020 17:25, Xia, Hongyan wrote:
>>> Ping.
>>
>> Sorry, there's just too much else also needing attention. I'm
>> doing what I can review-wise, and I assume some others do so,
>> too. You're very welcome to help with the review load.
> 
> Wei and I already reviewed the patch. So Hongyan is mainly waiting on 
> the maintainers (Andrew and you) to give their final ack.
> 
> Would you be happy to give your ack based on the reviews from Wei and me?

On some changes I would be, but e.g. newly introduced APIs I want
to look at closely myself, and here the more to see whether prior
comments of mine (which iirc I did provide) have been addressed
suitably.

Jan
Jan Beulich Feb. 20, 2020, 12:10 p.m. UTC | #6
On 28.01.2020 15:21, Wei Liu wrote:
> On Tue, Jan 28, 2020 at 01:50:05PM +0000, Hongyan Xia wrote:
>> From: Wei Liu <wei.liu2@citrix.com>
>>
>> [...]
>>
>> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
>> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
>> Reviewed-by: Julien Grall <jgrall@amazon.com>
> 
> Reviewed-by: Wei Liu <wl@xen.org>

Acked-by: Jan Beulich <jbeulich@suse.com>

I'm sorry for the delay.

Jan
Wei Liu Feb. 20, 2020, 12:47 p.m. UTC | #7
On Thu, Feb 20, 2020 at 01:10:56PM +0100, Jan Beulich wrote:
> On 28.01.2020 15:21, Wei Liu wrote:
> > On Tue, Jan 28, 2020 at 01:50:05PM +0000, Hongyan Xia wrote:
> >> From: Wei Liu <wei.liu2@citrix.com>
> >>
> >> [...]
> >>
> >> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> >> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
> >> Reviewed-by: Julien Grall <jgrall@amazon.com>
> > 
> > Reviewed-by: Wei Liu <wl@xen.org>
> 
> Acked-by: Jan Beulich <jbeulich@suse.com>

Thanks. I have pushed this patch to unblock Hongyan.

Wei.

Patch

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index f50c065af3..fa824d5252 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -119,6 +119,7 @@ 
 #include <xen/efi.h>
 #include <xen/grant_table.h>
 #include <xen/hypercall.h>
+#include <xen/mm.h>
 #include <asm/paging.h>
 #include <asm/shadow.h>
 #include <asm/page.h>
@@ -4947,22 +4948,43 @@  int mmcfg_intercept_write(
 }
 
 void *alloc_xen_pagetable(void)
+{
+    mfn_t mfn = alloc_xen_pagetable_new();
+
+    return mfn_eq(mfn, INVALID_MFN) ? NULL : mfn_to_virt(mfn_x(mfn));
+}
+
+void free_xen_pagetable(void *v)
+{
+    mfn_t mfn = v ? virt_to_mfn(v) : INVALID_MFN;
+
+    free_xen_pagetable_new(mfn);
+}
+
+/*
+ * For these PTE APIs, the caller must follow the alloc-map-unmap-free
+ * lifecycle, which means explicitly mapping the PTE pages before accessing
+ * them. The caller must check whether the allocation has succeeded, and only
+ * pass valid MFNs to map_domain_page().
+ */
+mfn_t alloc_xen_pagetable_new(void)
 {
     if ( system_state != SYS_STATE_early_boot )
     {
         void *ptr = alloc_xenheap_page();
 
         BUG_ON(!hardware_domain && !ptr);
-        return ptr;
+        return ptr ? virt_to_mfn(ptr) : INVALID_MFN;
     }
 
-    return mfn_to_virt(mfn_x(alloc_boot_pages(1, 1)));
+    return alloc_boot_pages(1, 1);
 }
 
-void free_xen_pagetable(void *v)
+/* mfn can be INVALID_MFN */
+void free_xen_pagetable_new(mfn_t mfn)
 {
-    if ( system_state != SYS_STATE_early_boot )
-        free_xenheap_page(v);
+    if ( system_state != SYS_STATE_early_boot && !mfn_eq(mfn, INVALID_MFN) )
+        free_xenheap_page(mfn_to_virt(mfn_x(mfn)));
 }
 
 static DEFINE_SPINLOCK(map_pgdir_lock);
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 2ca8882ad0..ac81991e62 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -582,6 +582,9 @@  void *do_page_walk(struct vcpu *v, unsigned long addr);
 /* Allocator functions for Xen pagetables. */
 void *alloc_xen_pagetable(void);
 void free_xen_pagetable(void *v);
+mfn_t alloc_xen_pagetable_new(void);
+void free_xen_pagetable_new(mfn_t mfn);
+
 l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
 
 int __sync_local_execstate(void);