
[v2] altp2m: Allow the hostp2m to be shared

Message ID 1461772126-10305-1-git-send-email-tamas@tklengyel.com (mailing list archive)
State New, archived

Commit Message

Tamas K Lengyel April 27, 2016, 3:48 p.m. UTC
Don't propagate altp2m changes from ept_set_entry for mem_sharing, as
mem_sharing already holds the lock; instead, propagate the changes to the
altp2m tables once mem_sharing successfully finishes. Allow hostp2m entries
to be of type p2m_ram_shared when applying mem_access. Also, do not trigger
PoD for the hostp2m when setting altp2m mem_access, in line with the
non-altp2m mem_access path.

Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Keir Fraser <keir@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>

v2: Cosmetic fixes and addressing PoD in the commit message
---
 xen/arch/x86/mm/mem_sharing.c | 10 +++++++++-
 xen/arch/x86/mm/p2m-ept.c     |  2 +-
 xen/arch/x86/mm/p2m.c         |  5 ++---
 3 files changed, 12 insertions(+), 5 deletions(-)

Comments

George Dunlap April 28, 2016, 10:36 a.m. UTC | #1
On 27/04/16 16:48, Tamas K Lengyel wrote:
> Don't propagate altp2m changes from ept_set_entry for memshare as memshare
> already has the lock. We call altp2m propagate changes once memshare
> successfully finishes. Allow the hostp2m entries to be of type
> p2m_ram_shared when applying mem_access. Also, do not trigger PoD for hostp2m
> when setting altp2m mem_access to be in-line with non-altp2m mem_access path.
> 
> Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
> ---
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Keir Fraser <keir@xen.org>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Jun Nakajima <jun.nakajima@intel.com>
> Cc: Kevin Tian <kevin.tian@intel.com>
> 
> v2: Cosmetic fixes and addressing PoD in the commit message
> ---
>  xen/arch/x86/mm/mem_sharing.c | 10 +++++++++-
>  xen/arch/x86/mm/p2m-ept.c     |  2 +-
>  xen/arch/x86/mm/p2m.c         |  5 ++---
>  3 files changed, 12 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
> index a522423..6693661 100644
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -35,6 +35,7 @@
>  #include <asm/p2m.h>
>  #include <asm/atomic.h>
>  #include <asm/event.h>
> +#include <asm/altp2m.h>
>  #include <xsm/xsm.h>
>  
>  #include "mm-locks.h"
> @@ -926,11 +927,12 @@ int mem_sharing_share_pages(struct domain *sd, unsigned long sgfn, shr_handle_t
>      int ret = -EINVAL;
>      mfn_t smfn, cmfn;
>      p2m_type_t smfn_type, cmfn_type;
> +    p2m_access_t cmfn_a;
>      struct two_gfns tg;
>      struct rmap_iterator ri;
>  
>      get_two_gfns(sd, sgfn, &smfn_type, NULL, &smfn,
> -                 cd, cgfn, &cmfn_type, NULL, &cmfn,
> +                 cd, cgfn, &cmfn_type, &cmfn_a, &cmfn,
>                   0, &tg);
>  
>      /* This tricky business is to avoid two callers deadlocking if 
> @@ -1026,6 +1028,12 @@ int mem_sharing_share_pages(struct domain *sd, unsigned long sgfn, shr_handle_t
>      /* We managed to free a domain page. */
>      atomic_dec(&nr_shared_mfns);
>      atomic_inc(&nr_saved_mfns);
> +
> +    /* Save the change to the altp2m tables as well if active */
> +    if ( altp2m_active(cd) )
> +        p2m_altp2m_propagate_change(cd, _gfn(cgfn), smfn, PAGE_ORDER_4K,
> +                                    p2m_ram_shared, cmfn_a);
> +
>      ret = 0;
>      
>  err_out:
> diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
> index 1ed5b47..ff84242 100644
> --- a/xen/arch/x86/mm/p2m-ept.c
> +++ b/xen/arch/x86/mm/p2m-ept.c
> @@ -847,7 +847,7 @@ out:
>      if ( is_epte_present(&old_entry) )
>          ept_free_entry(p2m, &old_entry, target);
>  
> -    if ( rc == 0 && p2m_is_hostp2m(p2m) )
> +    if ( rc == 0 && p2m_is_hostp2m(p2m) && p2mt != p2m_ram_shared )
>          p2m_altp2m_propagate_change(d, _gfn(gfn), mfn, order, p2mt, p2ma);

Sorry, just double-checking here: This is the right thing to do because
the only code that is *setting* a page p2m_ram_shared should be the
sharing code, which (after this patch) should be propagating the change
itself?

I'm just conscious that this introduces an invariant that would be easy
to accidentally break.

At the moment the only place that calls set_entry() with p2m_ram_shared
is in p2m.c:set_shared_p2m_entry(), which is called from
mem_sharing.c:__mem_sharing_unshare_page() and
mem_sharing.c:mem_sharing_share_pages().  You've handled the second case
by adding the if() clause at the end of the operation; and it looks like
the first case is handled because the shared page will be overwritten
by the p2m_change_type_one().

I think at very least adding a comment here saying something like the
following would be helpful:

"Memsharing is responsible for updating altp2ms after the sharing
operation is complete"

And perhaps a note before the p2m_change_type_one() in
__mem_sharing_unshare_page() saying something like:

"This will take care of updating the altp2m mappings."

What do you think?

 -George
Tamas K Lengyel April 28, 2016, 2:49 p.m. UTC | #2
On Thu, Apr 28, 2016 at 4:36 AM, George Dunlap <george.dunlap@citrix.com>
wrote:

> On 27/04/16 16:48, Tamas K Lengyel wrote:
> > Don't propagate altp2m changes from ept_set_entry for memshare as memshare
> > already has the lock. We call altp2m propagate changes once memshare
> > successfully finishes. Allow the hostp2m entries to be of type
> > p2m_ram_shared when applying mem_access. Also, do not trigger PoD for hostp2m
> > when setting altp2m mem_access to be in-line with non-altp2m mem_access path.
> >
> > Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
> > ---
> > Cc: George Dunlap <george.dunlap@eu.citrix.com>
> > Cc: Keir Fraser <keir@xen.org>
> > Cc: Jan Beulich <jbeulich@suse.com>
> > Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> > Cc: Jun Nakajima <jun.nakajima@intel.com>
> > Cc: Kevin Tian <kevin.tian@intel.com>
> >
> > v2: Cosmetic fixes and addressing PoD in the commit message
> > ---
> >  xen/arch/x86/mm/mem_sharing.c | 10 +++++++++-
> >  xen/arch/x86/mm/p2m-ept.c     |  2 +-
> >  xen/arch/x86/mm/p2m.c         |  5 ++---
> >  3 files changed, 12 insertions(+), 5 deletions(-)
> >
> > diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
> > index a522423..6693661 100644
> > --- a/xen/arch/x86/mm/mem_sharing.c
> > +++ b/xen/arch/x86/mm/mem_sharing.c
> > @@ -35,6 +35,7 @@
> >  #include <asm/p2m.h>
> >  #include <asm/atomic.h>
> >  #include <asm/event.h>
> > +#include <asm/altp2m.h>
> >  #include <xsm/xsm.h>
> >
> >  #include "mm-locks.h"
> > @@ -926,11 +927,12 @@ int mem_sharing_share_pages(struct domain *sd, unsigned long sgfn, shr_handle_t
> >      int ret = -EINVAL;
> >      mfn_t smfn, cmfn;
> >      p2m_type_t smfn_type, cmfn_type;
> > +    p2m_access_t cmfn_a;
> >      struct two_gfns tg;
> >      struct rmap_iterator ri;
> >
> >      get_two_gfns(sd, sgfn, &smfn_type, NULL, &smfn,
> > -                 cd, cgfn, &cmfn_type, NULL, &cmfn,
> > +                 cd, cgfn, &cmfn_type, &cmfn_a, &cmfn,
> >                   0, &tg);
> >
> >      /* This tricky business is to avoid two callers deadlocking if
> > @@ -1026,6 +1028,12 @@ int mem_sharing_share_pages(struct domain *sd, unsigned long sgfn, shr_handle_t
> >      /* We managed to free a domain page. */
> >      atomic_dec(&nr_shared_mfns);
> >      atomic_inc(&nr_saved_mfns);
> > +
> > +    /* Save the change to the altp2m tables as well if active */
> > +    if ( altp2m_active(cd) )
> > +        p2m_altp2m_propagate_change(cd, _gfn(cgfn), smfn, PAGE_ORDER_4K,
> > +                                    p2m_ram_shared, cmfn_a);
> > +
> >      ret = 0;
> >
> >  err_out:
> > diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
> > index 1ed5b47..ff84242 100644
> > --- a/xen/arch/x86/mm/p2m-ept.c
> > +++ b/xen/arch/x86/mm/p2m-ept.c
> > @@ -847,7 +847,7 @@ out:
> >      if ( is_epte_present(&old_entry) )
> >          ept_free_entry(p2m, &old_entry, target);
> >
> > -    if ( rc == 0 && p2m_is_hostp2m(p2m) )
> > +    if ( rc == 0 && p2m_is_hostp2m(p2m) && p2mt != p2m_ram_shared )
> >          p2m_altp2m_propagate_change(d, _gfn(gfn), mfn, order, p2mt, p2ma);
>
> Sorry, just double-checking here: This is the right thing to do because
> the only code that is *setting* a page p2m_ram_shared should be the
> sharing code, which (after this patch) should be propagating the change
> itself?
>

The problem is really with the mm-lock: the mem_sharing paths already take
mem_sharing_page_lock when changing the type to p2m_ram_shared, which
interferes with altp2m_list_lock. It would otherwise be fine to propagate the
changes to the altp2m tables from there, if we could make the locks not
interfere, but that would require more changes than this patch.


>
> I'm just conscious that this introduces an invariant that would be easy
> to accidentally break.
>
> At the moment the only place that calls set_entry() with p2m_ram_shared
> is in p2m.c:set_shared_p2m_entry(), which is called from
> mem_sharing.c:__mem_sharing_unshare_page() and
> mem_sharing.c:mem_sharing_share_pages().  You've handled the second case
> by adding the if() clause at the end of the operation; and it looks like
> the first case is  handled because the shared page will be overwritten
> by the p2m_change_type_one().
>

Right, I don't really see much point in setting the pages shared (again)
right before unsharing overwrites them, so there is no altp2m propagation
there. The p2m_change_type_one call is then outside the mem_sharing_page_lock,
so there was no need for an extra altp2m check.


>
> I think at very least adding a comment here saying something like the
> following would be helpful:
>
> "Memsharing is responsible for updating altp2ms after the sharing
> operation is complete"
>
> And perhaps a note before the p2m_change_type_one() in
> __mem_sharing_unshare_page() saying something like:
>
> "This will take care of updating the altp2m mappings."
>
> What do you think?
>

That all sounds good to me. The mem_sharing code is under-documented enough
that IMHO adding more comments would help make it more maintainable
going forward =)

Thanks,
Tamas

Patch

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index a522423..6693661 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -35,6 +35,7 @@ 
 #include <asm/p2m.h>
 #include <asm/atomic.h>
 #include <asm/event.h>
+#include <asm/altp2m.h>
 #include <xsm/xsm.h>
 
 #include "mm-locks.h"
@@ -926,11 +927,12 @@  int mem_sharing_share_pages(struct domain *sd, unsigned long sgfn, shr_handle_t
     int ret = -EINVAL;
     mfn_t smfn, cmfn;
     p2m_type_t smfn_type, cmfn_type;
+    p2m_access_t cmfn_a;
     struct two_gfns tg;
     struct rmap_iterator ri;
 
     get_two_gfns(sd, sgfn, &smfn_type, NULL, &smfn,
-                 cd, cgfn, &cmfn_type, NULL, &cmfn,
+                 cd, cgfn, &cmfn_type, &cmfn_a, &cmfn,
                  0, &tg);
 
     /* This tricky business is to avoid two callers deadlocking if 
@@ -1026,6 +1028,12 @@  int mem_sharing_share_pages(struct domain *sd, unsigned long sgfn, shr_handle_t
     /* We managed to free a domain page. */
     atomic_dec(&nr_shared_mfns);
     atomic_inc(&nr_saved_mfns);
+
+    /* Save the change to the altp2m tables as well if active */
+    if ( altp2m_active(cd) )
+        p2m_altp2m_propagate_change(cd, _gfn(cgfn), smfn, PAGE_ORDER_4K,
+                                    p2m_ram_shared, cmfn_a);
+
     ret = 0;
     
 err_out:
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 1ed5b47..ff84242 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -847,7 +847,7 @@  out:
     if ( is_epte_present(&old_entry) )
         ept_free_entry(p2m, &old_entry, target);
 
-    if ( rc == 0 && p2m_is_hostp2m(p2m) )
+    if ( rc == 0 && p2m_is_hostp2m(p2m) && p2mt != p2m_ram_shared )
         p2m_altp2m_propagate_change(d, _gfn(gfn), mfn, order, p2mt, p2ma);
 
     return rc;
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 6eef2f3..2a42b91 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1765,11 +1765,10 @@  int p2m_set_altp2m_mem_access(struct domain *d, struct p2m_domain *hp2m,
     /* Check host p2m if no valid entry in alternate */
     if ( !mfn_valid(mfn) )
     {
-        mfn = hp2m->get_entry(hp2m, gfn_l, &t, &old_a,
-                              P2M_ALLOC | P2M_UNSHARE, &page_order, NULL);
+        mfn = hp2m->get_entry(hp2m, gfn_l, &t, &old_a, 0, &page_order, NULL);
 
         rc = -ESRCH;
-        if ( !mfn_valid(mfn) || t != p2m_ram_rw )
+        if ( !mfn_valid(mfn) || (t != p2m_ram_rw && t != p2m_ram_shared) )
             return rc;
 
         /* If this is a superpage, copy that first */