[v6] altp2m: Allow shared entries to be copied to altp2m views during lazycopy

Message ID 1469471587-12921-1-git-send-email-tamas.lengyel@zentific.com
State New, archived

Commit Message

Tamas Lengyel July 25, 2016, 6:33 p.m. UTC
Move sharing locks above altp2m to avoid a locking order violation and a
hypervisor crash during unsharing operations when altp2m is active.

Applying mem_access settings or remapping gfns in altp2m views will
automatically unshare the page if it was shared previously; for this we
use the get_entry() wrappers to properly initiate unsharing. Also,
disallow nominating pages for which there are pre-existing altp2m
mem_access settings or remappings. However, allow lazycopy to populate
altp2m views with shared entries, as unsharing will automatically
propagate the change to those entries as well.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>

v6: Lock the altp2m list when checking during nomination
    Update commit message
---
 xen/arch/x86/mm/mem_sharing.c | 32 +++++++++++++++++++++++++++++++-
 xen/arch/x86/mm/mm-locks.h    | 30 +++++++++++++++---------------
 xen/arch/x86/mm/p2m.c         | 12 +++++++-----
 3 files changed, 53 insertions(+), 21 deletions(-)
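
For background on the lock-ordering point in the changelog: locks in
mm-locks.h must be taken in the order they are declared in the file, and
debug builds enforce this at run time with lock levels, which is why the
mm-locks.h hunk below simply moves the per-page sharing block above the
altp2m blocks. The following is a minimal self-contained model of that
discipline, not Xen code; the names, level values and the single global
level are simplifications (the real checks are per-CPU and permit
retaking recursive locks).

/* Toy model of the mm-locks ordering discipline (illustrative only).
 * Each lock's level comes from its position in mm-locks.h; taking a
 * lock whose level is not above the highest level already held is a
 * bug. Unlocking and recursive locks are omitted for brevity. */
#include <assert.h>
#include <stdio.h>

enum mm_lock_level {
    LVL_NESTEDP2M    = 1,
    LVL_P2M          = 2,
    LVL_PAGE_SHARING = 3,   /* post-patch: ordered before altp2m */
    LVL_ALTP2M_LIST  = 4,
    LVL_ALTP2M       = 5,
    LVL_POD          = 6,
};

static int current_level;   /* per-CPU in Xen; one global in this toy */

static void mm_lock_model(enum mm_lock_level l, const char *name)
{
    /* The ordering check: a debug hypervisor crashes here on a
     * violation, which is what the changelog refers to. */
    assert(l > current_level && "mm lock ordering violation");
    current_level = l;
    printf("took %s (level %d)\n", name, (int)l);
}

int main(void)
{
    /* Legal with the new order: sharing first, then altp2m, so an
     * unsharing operation that subsequently touches altp2m views no
     * longer trips the check. With the pre-patch order (altp2m
     * declared before sharing) this same sequence was a violation. */
    mm_lock_model(LVL_PAGE_SHARING, "page_sharing");
    mm_lock_model(LVL_ALTP2M_LIST, "altp2m_list");
    mm_lock_model(LVL_ALTP2M, "altp2m");
    return 0;
}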

Comments

George Dunlap July 26, 2016, 9:35 a.m. UTC | #1
On Mon, Jul 25, 2016 at 7:33 PM, Tamas K Lengyel
<tamas.lengyel@zentific.com> wrote:
> [changelog snipped -- quoted in full above]
>
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>

I hope you don't mind if I change the changelog text to the following:

---
altp2m: Allow shared entries to be copied to altp2m views during lazycopy

Move sharing locks above altp2m to avoid a locking order violation and a
hypervisor crash during unsharing operations when altp2m is active.

Applying mem_access settings or remapping gfns in altp2m views will
automatically unshare the page if it was shared previously. Also,
disallow nominating pages for which there are pre-existing altp2m
mem_access settings or remappings present. However, allow altp2m to
populate altp2m views with shared entries during lazycopy as unsharing
will automatically propagate the change to these entries in altp2m
views as well.

While we're here, switch to using the appropriate wrappers rather than
calling p2m->get_entry() directly.
---

That separates out the description of the important algorithmic change
of the patch from the purely cosmetic change of using the get_entry()
wrappers; hopefully that should make both easier to grok.

(I'll check it in as above if I don't hear any objections by EOD today.)

 -George
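
For context on the wrapper distinction above: p2m->get_entry() is the
raw table lookup, while the get_gfn_type_access() wrapper used by the
patch also takes the gfn lock and, when passed P2M_UNSHARE, breaks
sharing so the caller gets a private page. Below is a rough,
self-contained sketch of that behavioural difference; the types and
functions are stand-ins for illustration, not the real declarations
(those live under xen/include/asm-x86/p2m.h).

/* Illustrative stand-ins -- not the Xen API. */
#include <stdbool.h>
#include <stdio.h>

#define P2M_ALLOC   (1u << 0)
#define P2M_UNSHARE (1u << 1)

struct p2m_entry { unsigned long mfn; bool shared; };

/* Stand-in for p2m->get_entry(): a bare lookup that neither locks
 * the gfn nor breaks sharing. */
static struct p2m_entry raw_get_entry(struct p2m_entry *tbl,
                                      unsigned long gfn)
{
    return tbl[gfn];
}

/* Stand-in for the get_gfn_type_access() wrapper: takes the gfn lock
 * (elided here) and honours P2M_UNSHARE by handing the caller a
 * private copy of a previously shared page. */
static struct p2m_entry wrapped_get_gfn(struct p2m_entry *tbl,
                                        unsigned long gfn,
                                        unsigned int flags)
{
    if ( (flags & P2M_UNSHARE) && tbl[gfn].shared )
    {
        tbl[gfn].shared = false;    /* break sharing... */
        tbl[gfn].mfn = 100 + gfn;   /* ...onto a fresh private mfn */
    }
    return tbl[gfn];
}

int main(void)
{
    struct p2m_entry tbl[1] = { { .mfn = 7, .shared = true } };

    struct p2m_entry a = raw_get_entry(tbl, 0);
    struct p2m_entry b = wrapped_get_gfn(tbl, 0, P2M_ALLOC | P2M_UNSHARE);

    printf("raw: mfn=%lu shared=%d; wrapped: mfn=%lu shared=%d\n",
           a.mfn, (int)a.shared, b.mfn, (int)b.shared);
    return 0;
}

Every caller of the wrapper is expected to drop the reference with
__put_gfn() afterwards, as the p2m.c hunks below do.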
George Dunlap July 26, 2016, 10:25 a.m. UTC | #2
On Tue, Jul 26, 2016 at 10:35 AM, George Dunlap
<George.Dunlap@eu.citrix.com> wrote:
> [proposed changelog and Reviewed-by snipped -- quoted in full above]
>
> (I'll check it in as above if I don't hear any objections by EOD today.)

Sorry, suddenly remembered that you're actually *not* in a European
time zone (IIRC); I'll wait until tomorrow morning.

 -George
Tamas Lengyel July 26, 2016, 3:16 p.m. UTC | #3
On Tue, Jul 26, 2016 at 4:25 AM, George Dunlap
<George.Dunlap@eu.citrix.com> wrote:
>> [proposed changelog snipped -- quoted in full above]
>>
>> (I'll check it in as above if I don't hear any objections by EOD today.)

No problem, that's fine.

>
> Sorry, suddenly remembered that you're actually *not* in a European
> time zone (IIRC); I'll wait until tomorrow morning.
>

Correct, I'm in a -8h time zone compared to most of Europe.

Thanks,
Tamas
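
One detail worth flagging before the diff: in the p2m_altp2m_lazy_copy()
hunk the host-p2m lookup drops P2M_UNSHARE and keeps only P2M_ALLOC,
which is what allows shared entries to be copied into altp2m views. Per
the changelog this is safe because a later unshare rewrites the host
entry and propagates the new mapping to the views mirroring it. A toy
model of that invariant follows; it is illustrative only and not Xen's
actual propagation code.

/* Toy model: lazycopy may mirror shared entries into views because
 * unsharing later updates both the host entry and the views. */
#include <stdbool.h>
#include <stdio.h>

#define MAX_VIEWS 4

struct entry { unsigned long mfn; bool shared; bool valid; };

static struct entry host;
static struct entry view[MAX_VIEWS];

/* Lazycopy: on a fault in a view, mirror the host entry as-is --
 * shared or not (hence no P2M_UNSHARE on the lookup). */
static void lazy_copy(unsigned int idx)
{
    view[idx] = host;
}

/* Unshare: give the guest a private mfn, then propagate the change
 * to every view still mirroring the shared mapping. */
static void unshare(unsigned long new_mfn)
{
    unsigned int i;

    host.mfn = new_mfn;
    host.shared = false;
    for ( i = 0; i < MAX_VIEWS; i++ )
        if ( view[i].valid && view[i].shared )
            view[i] = host;
}

int main(void)
{
    host = (struct entry){ .mfn = 7, .shared = true, .valid = true };
    lazy_copy(0);   /* view 0 now maps the shared page */
    unshare(42);    /* e.g. a write fault breaks sharing */
    printf("view 0: mfn=%lu shared=%d\n", view[0].mfn, (int)view[0].shared);
    return 0;
}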

Patch

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index a522423..47e0820 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -33,6 +33,7 @@ 
 #include <asm/page.h>
 #include <asm/string.h>
 #include <asm/p2m.h>
+#include <asm/altp2m.h>
 #include <asm/atomic.h>
 #include <asm/event.h>
 #include <xsm/xsm.h>
@@ -828,14 +829,16 @@  int mem_sharing_nominate_page(struct domain *d,
                               int expected_refcnt,
                               shr_handle_t *phandle)
 {
+    struct p2m_domain *hp2m = p2m_get_hostp2m(d);
     p2m_type_t p2mt;
+    p2m_access_t p2ma;
     mfn_t mfn;
     struct page_info *page = NULL; /* gcc... */
     int ret;
 
     *phandle = 0UL;
 
-    mfn = get_gfn(d, gfn, &p2mt);
+    mfn = get_gfn_type_access(hp2m, gfn, &p2mt, &p2ma, 0, NULL);
 
     /* Check if mfn is valid */
     ret = -EINVAL;
@@ -861,6 +864,33 @@  int mem_sharing_nominate_page(struct domain *d,
     if ( !p2m_is_sharable(p2mt) )
         goto out;
 
+    /* Check if there are mem_access/remapped altp2m entries for this page */
+    if ( altp2m_active(d) )
+    {
+        unsigned int i;
+        struct p2m_domain *ap2m;
+        mfn_t amfn;
+        p2m_access_t ap2ma;
+
+        altp2m_list_lock(d);
+
+        for ( i = 0; i < MAX_ALTP2M; i++ )
+        {
+            ap2m = d->arch.altp2m_p2m[i];
+            if ( !ap2m )
+                continue;
+
+            amfn = get_gfn_type_access(ap2m, gfn, NULL, &ap2ma, 0, NULL);
+            if ( mfn_valid(amfn) && (mfn_x(amfn) != mfn_x(mfn) || ap2ma != p2ma) )
+            {
+                altp2m_list_unlock(d);
+                goto out;
+            }
+        }
+
+        altp2m_list_unlock(d);
+    }
+
     /* Try to convert the mfn to the sharable type */
     page = mfn_to_page(mfn);
     ret = page_make_sharable(d, page, expected_refcnt); 
diff --git a/xen/arch/x86/mm/mm-locks.h b/xen/arch/x86/mm/mm-locks.h
index 086c8bb..74fdfc1 100644
--- a/xen/arch/x86/mm/mm-locks.h
+++ b/xen/arch/x86/mm/mm-locks.h
@@ -242,6 +242,21 @@  declare_mm_lock(nestedp2m)
 
 declare_mm_rwlock(p2m);
 
+/* Sharing per page lock
+ *
+ * This is an external lock, not represented by an mm_lock_t. The memory
+ * sharing lock uses it to protect addition and removal of (gfn,domain)
+ * tuples to a shared page. We enforce order here against the p2m lock,
+ * which is taken after the page_lock to change the gfn's p2m entry.
+ *
+ * The lock is recursive because during share we lock two pages. */
+
+declare_mm_order_constraint(per_page_sharing)
+#define page_sharing_mm_pre_lock()   mm_enforce_order_lock_pre_per_page_sharing()
+#define page_sharing_mm_post_lock(l, r) \
+        mm_enforce_order_lock_post_per_page_sharing((l), (r))
+#define page_sharing_mm_unlock(l, r) mm_enforce_order_unlock((l), (r))
+
 /* Alternate P2M list lock (per-domain)
  *
  * A per-domain lock that protects the list of alternate p2m's.
@@ -287,21 +302,6 @@  declare_mm_rwlock(altp2m);
 #define p2m_locked_by_me(p)   mm_write_locked_by_me(&(p)->lock)
 #define gfn_locked_by_me(p,g) p2m_locked_by_me(p)
 
-/* Sharing per page lock
- *
- * This is an external lock, not represented by an mm_lock_t. The memory
- * sharing lock uses it to protect addition and removal of (gfn,domain)
- * tuples to a shared page. We enforce order here against the p2m lock,
- * which is taken after the page_lock to change the gfn's p2m entry.
- *
- * The lock is recursive because during share we lock two pages. */
-
-declare_mm_order_constraint(per_page_sharing)
-#define page_sharing_mm_pre_lock()   mm_enforce_order_lock_pre_per_page_sharing()
-#define page_sharing_mm_post_lock(l, r) \
-        mm_enforce_order_lock_post_per_page_sharing((l), (r))
-#define page_sharing_mm_unlock(l, r) mm_enforce_order_unlock((l), (r))
-
 /* PoD lock (per-p2m-table)
  * 
  * Protects private PoD data structs: entry and cache
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index ff0cce8..812dbf6 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1786,8 +1786,9 @@  int p2m_set_altp2m_mem_access(struct domain *d, struct p2m_domain *hp2m,
     /* Check host p2m if no valid entry in alternate */
     if ( !mfn_valid(mfn) )
     {
-        mfn = hp2m->get_entry(hp2m, gfn_l, &t, &old_a,
-                              P2M_ALLOC | P2M_UNSHARE, &page_order, NULL);
+
+        mfn = get_gfn_type_access(hp2m, gfn_l, &t, &old_a,
+                                  P2M_ALLOC | P2M_UNSHARE, &page_order);
 
         rc = -ESRCH;
         if ( !mfn_valid(mfn) || t != p2m_ram_rw )
@@ -2363,7 +2364,7 @@  bool_t p2m_altp2m_lazy_copy(struct vcpu *v, paddr_t gpa,
         return 0;
 
     mfn = get_gfn_type_access(hp2m, gfn_x(gfn), &p2mt, &p2ma,
-                              P2M_ALLOC | P2M_UNSHARE, &page_order);
+                              P2M_ALLOC, &page_order);
     __put_gfn(hp2m, gfn_x(gfn));
 
     if ( mfn_eq(mfn, INVALID_MFN) )
@@ -2562,8 +2563,8 @@  int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
     /* Check host p2m if no valid entry in alternate */
     if ( !mfn_valid(mfn) )
     {
-        mfn = hp2m->get_entry(hp2m, gfn_x(old_gfn), &t, &a,
-                              P2M_ALLOC | P2M_UNSHARE, &page_order, NULL);
+        mfn = get_gfn_type_access(hp2m, gfn_x(old_gfn), &t, &a,
+                                  P2M_ALLOC | P2M_UNSHARE, &page_order);
 
         if ( !mfn_valid(mfn) || t != p2m_ram_rw )
             goto out;
@@ -2588,6 +2589,7 @@  int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
     if ( !mfn_valid(mfn) )
         mfn = hp2m->get_entry(hp2m, gfn_x(new_gfn), &t, &a, 0, NULL, NULL);
 
+    /* Note: currently it is not safe to remap to a shared entry */
     if ( !mfn_valid(mfn) || (t != p2m_ram_rw) )
         goto out;
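
To summarise the two guards in the diff above: mem_sharing_nominate_page()
now refuses to nominate a gfn if any altp2m view already diverges from the
host p2m for it (a different mfn from a remapping, or a different access),
and p2m_change_altp2m_gfn() notes that remapping to a shared entry is not
safe. A stripped-down model of the nomination check follows; the names and
types are illustrative, not the hypervisor's.

/* Toy model of the nomination-time altp2m consistency check
 * (illustrative only). */
#include <stdbool.h>
#include <stdio.h>

#define MAX_ALTP2M 10

struct view_entry { bool valid; unsigned long mfn; int access; };

static bool nominate_ok(unsigned long host_mfn, int host_access,
                        const struct view_entry *views)
{
    unsigned int i;

    for ( i = 0; i < MAX_ALTP2M; i++ )
    {
        if ( !views[i].valid )
            continue;
        /* Divergent mfn (remapped) or divergent access: refuse, per
         * the patch's policy of not nominating pages that already
         * carry altp2m mem_access settings or remappings. */
        if ( views[i].mfn != host_mfn || views[i].access != host_access )
            return false;
    }
    return true;
}

int main(void)
{
    struct view_entry views[MAX_ALTP2M] = {
        { .valid = true, .mfn = 7, .access = 0 },  /* mirrors the host */
        { .valid = true, .mfn = 9, .access = 0 },  /* remapped gfn */
    };

    printf("nominate: %s\n",
           nominate_ok(7, 0, views) ? "allowed" : "refused");
    return 0;
}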