[1/2] x86/shadow: slightly consolidate sh_unshadow_for_p2m_change()

Message ID 521b39ce-2c2e-967e-ecc7-f66281aee562@suse.com (mailing list archive)
State New, archived
Series x86/P2M: allow 2M superpage use for shadowed guests

Commit Message

Jan Beulich Dec. 9, 2021, 11:26 a.m. UTC
In preparation for reactivating the presently dead 2M page path of the
function,
- also deal with the case of replacing an L1 page table all in one go,
- pull common checks out of the switch(). This includes extending a
  _PAGE_PRESENT check to L1 as well, which presumably was deemed
  redundant with p2m_is_valid() || p2m_is_grant(), but I think we are
  better off being explicit in all cases,
- replace a p2m_is_ram() check in the 2M case by an explicit
  _PAGE_PRESENT one, to make it more obvious that the subsequent
  l1e_get_mfn() really retrieves an MFN.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
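
For illustration, the consolidation can be modelled in a few lines of plain C. This is only a sketch, not the Xen code: struct entry, unshadow_sketch() and drop_shadows() are hypothetical stand-ins for l1_pgentry_t, sh_unshadow_for_p2m_change() and sh_remove_all_shadows_and_parents(), and the p2m type checks are elided. The point is the shape: the common validity checks are hoisted ahead of the switch(), and the new MFN is normalised to an INVALID_MFN sentinel when the new entry is not present.

#include <stdio.h>

#define PAGE_PRESENT 0x1u                 /* stand-in for _PAGE_PRESENT */
#define INVALID_MFN  (~0ul)               /* stand-in for Xen's INVALID_MFN */

struct entry {                            /* simplified stand-in for l1_pgentry_t */
    unsigned int  flags;
    unsigned long mfn;
};

/* Stub for sh_remove_all_shadows_and_parents() and friends. */
static void drop_shadows(unsigned long mfn)
{
    printf("unshadow mfn %#lx\n", mfn);
}

static void unshadow_sketch(struct entry old, struct entry new, unsigned int level)
{
    unsigned long nmfn;

    /*
     * Common checks, hoisted ahead of the switch() as in the patch.  The
     * real code additionally checks p2m_is_valid() / p2m_is_grant().
     */
    if ( !(old.flags & PAGE_PRESENT) || old.mfn == INVALID_MFN )
        return;

    /* Normalise: a non-present new entry yields the INVALID_MFN sentinel. */
    nmfn = (new.flags & PAGE_PRESENT) ? new.mfn : INVALID_MFN;

    if ( level == 1 && nmfn != old.mfn )
        drop_shadows(old.mfn);            /* this GFN->MFN mapping went away */
}

int main(void)
{
    struct entry old = { PAGE_PRESENT, 0x1234 };
    struct entry new = { 0, 0 };          /* entry being removed */

    unshadow_sketch(old, new, 1);         /* prints: unshadow mfn 0x1234 */
    return 0;
}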

Comments

George Dunlap June 24, 2022, 7:16 p.m. UTC | #1
> On 9 Dec 2021, at 11:26, Jan Beulich <jbeulich@suse.com> wrote:
> 
> In preparation for reactivating the presently dead 2M page path of the
> function,
> - also deal with the case of replacing an L1 page table all in one go,
> - pull common checks out of the switch(). This includes extending a
>  _PAGE_PRESENT check to L1 as well, which presumably was deemed
>  redundant with p2m_is_valid() || p2m_is_grant(), but I think we are
>  better off being explicit in all cases,
> - replace a p2m_is_ram() check in the 2M case by an explicit
>  _PAGE_PRESENT one, to make it more obvious that the subsequent
>  l1e_get_mfn() really retrieves an MFN.

Each of these changes requires careful checking to make sure there aren’t any bugs introduced.  I’d feel much more comfortable giving an R-b if they were broken out into separate patches.

 -George
Jan Beulich June 27, 2022, 6:26 a.m. UTC | #2
On 24.06.2022 21:16, George Dunlap wrote:
> 
> 
>> On 9 Dec 2021, at 11:26, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> In preparation for reactivating the presently dead 2M page path of the
>> function,
>> - also deal with the case of replacing an L1 page table all in one go,
>> - pull common checks out of the switch(). This includes extending a
>>  _PAGE_PRESENT check to L1 as well, which presumably was deemed
>>  redundant with p2m_is_valid() || p2m_is_grant(), but I think we are
>>  better off being explicit in all cases,
>> - replace a p2m_is_ram() check in the 2M case by an explicit
>>  _PAGE_PRESENT one, to make it more obvious that the subsequent
>>  l1e_get_mfn() really retrieves an MFN.
> 
> Each of these changes requires careful checking to make sure there aren’t any bugs introduced.  I’d feel much more comfortable giving an R-b if they were broken out into separate patches.

I'll see what I can do. It has been quite some time, but iirc trying
to do things separately didn't work out very well.

Jan

Patch

--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -801,7 +801,7 @@  static void sh_unshadow_for_p2m_change(s
                                        l1_pgentry_t old, l1_pgentry_t new,
                                        unsigned int level)
 {
-    mfn_t omfn = l1e_get_mfn(old);
+    mfn_t omfn = l1e_get_mfn(old), nmfn;
     unsigned int oflags = l1e_get_flags(old);
     p2m_type_t p2mt = p2m_flags_to_type(oflags);
     bool flush = false;
@@ -813,19 +813,30 @@  static void sh_unshadow_for_p2m_change(s
     if ( unlikely(!d->arch.paging.shadow.total_pages) )
         return;
 
+    /* Only previously present / valid entries need processing. */
+    if ( !(oflags & _PAGE_PRESENT) ||
+         (!p2m_is_valid(p2mt) && !p2m_is_grant(p2mt)) ||
+         !mfn_valid(omfn) )
+        return;
+
+    nmfn = l1e_get_flags(new) & _PAGE_PRESENT ? l1e_get_mfn(new) : INVALID_MFN;
+
     switch ( level )
     {
     default:
         /*
          * The following assertion is to make sure we don't step on 1GB host
-         * page support of HVM guest.
+         * page support of HVM guest. Plus we rely on ->set_entry() to never
+         * get called with orders above PAGE_ORDER_2M, not even to install
+         * non-present entries (which in principle ought to be fine even
+         * without respective large page support).
          */
-        ASSERT(!((oflags & _PAGE_PRESENT) && (oflags & _PAGE_PSE)));
+        ASSERT_UNREACHABLE();
         break;
 
     /* If we're removing an MFN from the p2m, remove it from the shadows too */
     case 1:
-        if ( (p2m_is_valid(p2mt) || p2m_is_grant(p2mt)) && mfn_valid(omfn) )
+        if ( !mfn_eq(nmfn, omfn) )
         {
             sh_remove_all_shadows_and_parents(d, omfn);
             if ( sh_remove_all_mappings(d, omfn, _gfn(gfn)) )
@@ -839,14 +850,9 @@  static void sh_unshadow_for_p2m_change(s
      * scheme, that's OK, but otherwise they must be unshadowed.
      */
     case 2:
-        if ( !(oflags & _PAGE_PRESENT) || !(oflags & _PAGE_PSE) )
-            break;
-
-        if ( p2m_is_valid(p2mt) && mfn_valid(omfn) )
         {
             unsigned int i;
-            mfn_t nmfn = l1e_get_mfn(new);
-            l1_pgentry_t *npte = NULL;
+            l1_pgentry_t *npte = NULL, *opte = NULL;
 
             /* If we're replacing a superpage with a normal L1 page, map it */
             if ( (l1e_get_flags(new) & _PAGE_PRESENT) &&
@@ -854,24 +860,39 @@  static void sh_unshadow_for_p2m_change(s
                  mfn_valid(nmfn) )
                 npte = map_domain_page(nmfn);
 
+            /* If we're replacing a normal L1 page, map it as well. */
+            if ( !(oflags & _PAGE_PSE) )
+                opte = map_domain_page(omfn);
+
             gfn &= ~(L1_PAGETABLE_ENTRIES - 1);
 
             for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
             {
-                if ( !npte ||
-                     !p2m_is_ram(p2m_flags_to_type(l1e_get_flags(npte[i]))) ||
-                     !mfn_eq(l1e_get_mfn(npte[i]), omfn) )
+                if ( opte )
+                {
+                    if ( !(l1e_get_flags(opte[i]) & _PAGE_PRESENT) )
+                        continue;
+                    omfn = l1e_get_mfn(opte[i]);
+                }
+
+                if ( npte )
+                    nmfn = l1e_get_flags(npte[i]) & _PAGE_PRESENT
+                           ? l1e_get_mfn(npte[i]) : INVALID_MFN;
+
+                if ( !mfn_eq(nmfn, omfn) )
                 {
                     /* This GFN->MFN mapping has gone away */
                     sh_remove_all_shadows_and_parents(d, omfn);
                     if ( sh_remove_all_mappings(d, omfn, _gfn(gfn + i)) )
                         flush = true;
                 }
+
                 omfn = mfn_add(omfn, 1);
+                nmfn = mfn_add(nmfn, 1);
             }
 
-            if ( npte )
-                unmap_domain_page(npte);
+            unmap_domain_page(opte);
+            unmap_domain_page(npte);
         }
 
         break;
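
To see the reworked 2M path in isolation, here is a rough, self-contained C model under the same caveats as the sketch above (struct entry, walk_2m() and drop_shadows() are hypothetical names, and 512 stands in for L1_PAGETABLE_ENTRIES). It mirrors the patch's loop: when the old entry was itself an L1 table, the per-slot old MFN comes from that table and non-present slots are skipped; the per-slot new MFN likewise comes from the new table or degrades to the sentinel; otherwise both MFNs are simply stepped across the superpage.

#include <stddef.h>
#include <stdio.h>

#define PAGE_PRESENT 0x1u                 /* stand-in for _PAGE_PRESENT */
#define INVALID_MFN  (~0ul)               /* stand-in for Xen's INVALID_MFN */
#define L1_ENTRIES   512                  /* stand-in for L1_PAGETABLE_ENTRIES */

struct entry {                            /* simplified stand-in for l1_pgentry_t */
    unsigned int  flags;
    unsigned long mfn;
};

/* Stub for the unshadowing the real code performs per changed mapping. */
static void drop_shadows(unsigned long gfn, unsigned long mfn)
{
    printf("gfn %#lx: unshadow mfn %#lx\n", gfn, mfn);
}

/*
 * opte/npte model the mapped old/new L1 tables; either is NULL when the
 * respective entry is (or becomes) a 2M superpage, as in the patch.
 */
static void walk_2m(unsigned long gfn, struct entry old, struct entry new,
                    const struct entry *opte, const struct entry *npte)
{
    unsigned long omfn = old.mfn;
    unsigned long nmfn = (new.flags & PAGE_PRESENT) ? new.mfn : INVALID_MFN;
    size_t i;

    gfn &= ~(unsigned long)(L1_ENTRIES - 1);  /* align to the 2M boundary */

    for ( i = 0; i < L1_ENTRIES; i++ )
    {
        if ( opte )                       /* old entry was an L1 table */
        {
            if ( !(opte[i].flags & PAGE_PRESENT) )
                continue;                 /* nothing was mapped at this slot */
            omfn = opte[i].mfn;
        }

        if ( npte )                       /* new entry is an L1 table */
            nmfn = (npte[i].flags & PAGE_PRESENT) ? npte[i].mfn : INVALID_MFN;

        if ( nmfn != omfn )
            drop_shadows(gfn + i, omfn);  /* this mapping has gone away */

        /* Step both MFNs; only meaningful in the superpage cases. */
        omfn++;
        nmfn++;
    }
}

int main(void)
{
    struct entry old = { PAGE_PRESENT, 0x200 };  /* 2M superpage */
    struct entry new = { 0, 0 };                 /* being removed */

    walk_2m(0x1000, old, new, NULL, NULL);  /* unshadows all 512 slots */
    return 0;
}

Note that, as in the patch, the sentinel nmfn is stepped along with omfn; with a ~0 sentinel, INVALID_MFN + i can only equal omfn if the old MFN was itself invalid, a case the hoisted checks already exclude.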