[v5,16/23] x86/mm: add pv prefix to {alloc, free}_page_type

Message ID 20170914125852.22129-17-wei.liu2@citrix.com
State New, archived

Commit Message

Wei Liu Sept. 14, 2017, 12:58 p.m. UTC
And move the declarations to pv/mm.h. The code will be moved later.

The stubs contain BUG() because they aren't supposed to be called when
PV is disabled.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/domain.c       |  2 +-
 xen/arch/x86/mm.c           | 14 +++++++-------
 xen/include/asm-x86/mm.h    |  3 ---
 xen/include/asm-x86/pv/mm.h | 12 ++++++++++++
 4 files changed, 20 insertions(+), 11 deletions(-)

Comments

Jan Beulich Sept. 22, 2017, 1:40 p.m. UTC | #1
>>> On 14.09.17 at 14:58, <wei.liu2@citrix.com> wrote:
> And move the declarations to pv/mm.h. The code will be moved later.
> 
> The stubs contain BUG() because they aren't supposed to be called when
> PV is disabled.

I'd prefer ASSERT_UNREACHABLE() - they return proper errors
after all, and there's no need to bring down a production system.
Additionally could you add (half) a sentence regarding the
PGT_l*_page_table uses outside of PV specific code, which I'm
sure you have verified can't make it into the stubs?

> --- a/xen/include/asm-x86/pv/mm.h
> +++ b/xen/include/asm-x86/pv/mm.h
> @@ -32,6 +32,11 @@ bool pv_map_ldt_shadow_page(unsigned int off);
>  
>  void pv_arch_init_memory(void);
>  
> +int pv_alloc_page_type(struct page_info *page, unsigned long type,
> +                       int preemptible);
> +int pv_free_page_type(struct page_info *page, unsigned long type,
> +                      int preemptible);
> +
>  #else
>  
>  #include <xen/errno.h>
> @@ -51,6 +56,13 @@ static inline bool pv_map_ldt_shadow_page(unsigned int off) { return false; }
>  
>  static inline void pv_arch_init_memory(void) {}
>  
> +static inline int pv_alloc_page_type(struct page_info *page, unsigned long type,
> +                                     int preemptible)
> +{ BUG(); return -EINVAL; }
> +static inline int pv_free_page_type(struct page_info *page, unsigned long type,
> +                                    int preemptible)
> +{ BUG(); return -EINVAL; }

Take the opportunity and make all the "preemptible" parameters bool?

Jan
Wei Liu Sept. 22, 2017, 2:07 p.m. UTC | #2
On Fri, Sep 22, 2017 at 07:40:04AM -0600, Jan Beulich wrote:
> >>> On 14.09.17 at 14:58, <wei.liu2@citrix.com> wrote:
> > And move the declarations to pv/mm.h. The code will be moved later.
> > 
> > The stubs contain BUG() because they aren't supposed to be called when
> > PV is disabled.
> 
> I'd prefer ASSERT_UNREACHABLE() - they return proper errors

ASSERT_UNREACHABLE() -- no problem.

> after all, and there's no need to bring down a production system.
> Additionally could you add (half) a sentence regarding the
> PGT_l*_page_table uses outside of PV specific code, which I'm
> sure you have verified can't make it into the stubs?

At this stage I can only verify it by reading the code. I can't turn off
PV yet.

To allocate a PGT_l*_page_table type page, the guest must explicitly
request such a type via the PV MMU hypercall; no code in the hypervisor
other than the PV dom0 builder and p2m_alloc_ptp would explicitly ask
for PGT_l*_page_table type pages.

p2m_alloc_table is a bit tricky. I think it can get away with not using
PGT_l*_page_table type pages, but that is work for another day. Anyway,
it currently frees the pages directly via the free_page function of the
respective paging mode, none of which goes into PV mm code.

So my conclusion from reading the code is that non-PV code can't make
it into the stubs.

> 
> > --- a/xen/include/asm-x86/pv/mm.h
> > +++ b/xen/include/asm-x86/pv/mm.h
> > @@ -32,6 +32,11 @@ bool pv_map_ldt_shadow_page(unsigned int off);
> >  
> >  void pv_arch_init_memory(void);
> >  
> > +int pv_alloc_page_type(struct page_info *page, unsigned long type,
> > +                       int preemptible);
> > +int pv_free_page_type(struct page_info *page, unsigned long type,
> > +                      int preemptible);
> > +
> >  #else
> >  
> >  #include <xen/errno.h>
> > @@ -51,6 +56,13 @@ static inline bool pv_map_ldt_shadow_page(unsigned int off) { return false; }
> >  
> >  static inline void pv_arch_init_memory(void) {}
> >  
> > +static inline int pv_alloc_page_type(struct page_info *page, unsigned long type,
> > +                                     int preemptible)
> > +{ BUG(); return -EINVAL; }
> > +static inline int pv_free_page_type(struct page_info *page, unsigned long type,
> > +                                    int preemptible)
> > +{ BUG(); return -EINVAL; }
> 
> Take the opportunity and make all the "preemptible" parameters bool?

No problem.
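
(Taken together, the two review comments would turn the stubs into
something like the sketch below; the actual next revision may of course
differ in detail:

    /* ASSERT_UNREACHABLE() expands to nothing in release (NDEBUG)
     * builds, so a stray call in production returns -EINVAL instead
     * of bringing the system down, while debug builds still catch it. */
    static inline int pv_alloc_page_type(struct page_info *page,
                                         unsigned long type,
                                         bool preemptible)
    { ASSERT_UNREACHABLE(); return -EINVAL; }
    static inline int pv_free_page_type(struct page_info *page,
                                        unsigned long type,
                                        bool preemptible)
    { ASSERT_UNREACHABLE(); return -EINVAL; }

The non-stub declarations in pv/mm.h and the definitions in mm.c would
switch "preemptible" from int to bool accordingly.)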
Jan Beulich Sept. 22, 2017, 2:24 p.m. UTC | #3
>>> On 22.09.17 at 16:07, <wei.liu2@citrix.com> wrote:
> On Fri, Sep 22, 2017 at 07:40:04AM -0600, Jan Beulich wrote:
>> Additionally could you add (half) a sentence regarding the
>> PGT_l*_page_table uses outside of PV specific code, which I'm
>> sure you have verified can't make it into the stubs?
> 
> At this stage I can only verify it by reading the code. I can't turn off
> PV yet.

Of course, that's what I did mean.

> To allocate a PGT_l*_page_table type page, the guest must explicitly
> request such a type via the PV MMU hypercall; no code in the hypervisor
> other than the PV dom0 builder and p2m_alloc_ptp would explicitly ask
> for PGT_l*_page_table type pages.
> 
> p2m_alloc_table is a bit tricky. I think it can get away with not using
> PGT_l*_page_table type pages, but that is work for another day. Anyway,
> it currently frees the pages directly via the free_page function of the
> respective paging mode, none of which goes into PV mm code.

Right, hence the request to extend the description a little.
shadow_enable() also has a use that's not so obviously not
a problem.

Jan
Wei Liu Sept. 22, 2017, 2:34 p.m. UTC | #4
On Fri, Sep 22, 2017 at 08:24:20AM -0600, Jan Beulich wrote:
> >>> On 22.09.17 at 16:07, <wei.liu2@citrix.com> wrote:
> > On Fri, Sep 22, 2017 at 07:40:04AM -0600, Jan Beulich wrote:
> >> Additionally could you add (half) a sentence regarding the
> >> PGT_l*_page_table uses outside of PV specific code, which I'm
> >> sure you have verified can't make it into the stubs?
> > 
> > At this stage I can only verify it by reading the code. I can't turn off
> > PV yet.
> 
> Of course, that's what I did mean.
> 
> > To allocate a PGT_l*_page_table type page, the guest must explicitly
> > request such a type via the PV MMU hypercall; no code in the hypervisor
> > other than the PV dom0 builder and p2m_alloc_ptp would explicitly ask
> > for PGT_l*_page_table type pages.
> > 
> > p2m_alloc_table is a bit tricky. I think it can get away with not using
> > PGT_l*_page_table type pages, but that is work for another day. Anyway,
> > it currently frees the pages directly via the free_page function of the
> > respective paging mode, none of which goes into PV mm code.
> 
> Right, hence the request to extend the description a little.
> shadow_enable() also has a use that's not so obviously not
> a problem.

The call chain is paging_enable -> shadow_enable, and paging_enable is
only called by HVM code.

The teardown in shadow_final_teardown is also done via p2m_teardown,
which goes to the free_page function of the respective paging mode.

All in all, it is not a problem in that code path either.

Patch

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index e9367bd8aa..3abd37e4dc 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1812,7 +1812,7 @@ static int relinquish_memory(
             if ( likely(y == x) )
             {
                 /* No need for atomic update of type_info here: noone else updates it. */
-                switch ( ret = free_page_type(page, x, 1) )
+                switch ( ret = pv_free_page_type(page, x, 1) )
                 {
                 case 0:
                     break;
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 3a919c19b8..86c7466fa0 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -1987,8 +1987,8 @@ static void get_page_light(struct page_info *page)
     while ( unlikely(y != x) );
 }
 
-static int alloc_page_type(struct page_info *page, unsigned long type,
-                           int preemptible)
+int pv_alloc_page_type(struct page_info *page, unsigned long type,
+                       int preemptible)
 {
     struct domain *owner = page_get_owner(page);
     int rc;
@@ -2017,7 +2017,7 @@ static int alloc_page_type(struct page_info *page, unsigned long type,
         rc = alloc_segdesc_page(page);
         break;
     default:
-        printk("Bad type in alloc_page_type %lx t=%" PRtype_info " c=%lx\n",
+        printk("Bad type in %s %lx t=%" PRtype_info " c=%lx\n", __func__,
                type, page->u.inuse.type_info,
                page->count_info);
         rc = -EINVAL;
@@ -2061,8 +2061,8 @@ static int alloc_page_type(struct page_info *page, unsigned long type,
 }
 
 
-int free_page_type(struct page_info *page, unsigned long type,
-                   int preemptible)
+int pv_free_page_type(struct page_info *page, unsigned long type,
+                      int preemptible)
 {
     struct domain *owner = page_get_owner(page);
     unsigned long gmfn;
@@ -2119,7 +2119,7 @@ int free_page_type(struct page_info *page, unsigned long type,
 static int __put_final_page_type(
     struct page_info *page, unsigned long type, int preemptible)
 {
-    int rc = free_page_type(page, type, preemptible);
+    int rc = pv_free_page_type(page, type, preemptible);
 
     /* No need for atomic update of type_info here: noone else updates it. */
     if ( rc == 0 )
@@ -2337,7 +2337,7 @@ static int __get_page_type(struct page_info *page, unsigned long type,
             page->nr_validated_ptes = 0;
             page->partial_pte = 0;
         }
-        rc = alloc_page_type(page, type, preemptible);
+        rc = pv_alloc_page_type(page, type, preemptible);
     }
 
     if ( (x & PGT_partial) && !(nx & PGT_partial) )
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index f2e0f498c4..56b2b94195 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -326,9 +326,6 @@ static inline void *__page_to_virt(const struct page_info *pg)
                     (PAGE_SIZE / (sizeof(*pg) & -sizeof(*pg))));
 }
 
-int free_page_type(struct page_info *page, unsigned long type,
-                   int preemptible);
-
 bool fill_ro_mpt(mfn_t mfn);
 void zap_ro_mpt(mfn_t mfn);
 
diff --git a/xen/include/asm-x86/pv/mm.h b/xen/include/asm-x86/pv/mm.h
index 4944a70c7a..0cd8beec39 100644
--- a/xen/include/asm-x86/pv/mm.h
+++ b/xen/include/asm-x86/pv/mm.h
@@ -32,6 +32,11 @@ bool pv_map_ldt_shadow_page(unsigned int off);
 
 void pv_arch_init_memory(void);
 
+int pv_alloc_page_type(struct page_info *page, unsigned long type,
+                       int preemptible);
+int pv_free_page_type(struct page_info *page, unsigned long type,
+                      int preemptible);
+
 #else
 
 #include <xen/errno.h>
@@ -51,6 +56,13 @@ static inline bool pv_map_ldt_shadow_page(unsigned int off) { return false; }
 
 static inline void pv_arch_init_memory(void) {}
 
+static inline int pv_alloc_page_type(struct page_info *page, unsigned long type,
+                                     int preemptible)
+{ BUG(); return -EINVAL; }
+static inline int pv_free_page_type(struct page_info *page, unsigned long type,
+                                    int preemptible)
+{ BUG(); return -EINVAL; }
+
 #endif
 
 #endif /* __X86_PV_MM_H__ */