
[v3,1/2] nvmx: implement support for MSR bitmaps

Message ID 20200203173728.18135-2-roger.pau@citrix.com (mailing list archive)
State Superseded
Series nvmx: implement support for MSR bitmaps

Commit Message

Roger Pau Monné Feb. 3, 2020, 5:37 p.m. UTC
The current implementation of nested VMX has half-baked handling of MSR
bitmaps for the L1 VMM: it maps the MSR bitmap provided by the L1 VMM,
but doesn't actually load it into the nested vmcs, and thus the nested
guest vmcs ends up using the same MSR bitmap as the L1 VMM.

This is wrong, as there's no assurance that the set of features enabled
for the L1 vmcs is the same set L1 itself is going to use in the nested
vmcs, and the mismatch can therefore lead to misconfigurations.

For example, the L1 vmcs can use x2APIC virtualization and virtual
interrupt delivery, in which case some x2APIC MSRs won't be trapped so
that they can be handled directly by the hardware using virtualization
extensions. On the other hand, the nested vmcs created by the L1 VMM
might not use any of those features, so an MSR bitmap that doesn't trap
accesses to the x2APIC MSRs would let such accesses leak through to the
underlying hardware.

Fix this by crafting a merged MSR bitmap from the bitmap used for L1
and the bitmap L1 provides for the nested guest.
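
Conceptually the merge is a bitwise OR of the two bitmaps: a set bit
requests an intercept, so an MSR access is left untrapped only when
both L0 and L1 leave it untrapped. A minimal sketch of the idea follows
(the helper name is illustrative, not part of the patch; the actual
merge is done inline in update_msrbitmap() below, and the field names
follow Xen's struct vmx_msr_bitmap):

    static void merge_msr_bitmaps(struct vmx_msr_bitmap *merged,
                                  const struct vmx_msr_bitmap *l0,
                                  const struct vmx_msr_bitmap *l1)
    {
        /*
         * A set bit means "intercept": OR keeps every intercept that
         * either level requests, so the strictest setting always wins.
         */
        bitmap_or(merged->read_low, l0->read_low, l1->read_low,
                  sizeof(merged->read_low) * 8);
        bitmap_or(merged->read_high, l0->read_high, l1->read_high,
                  sizeof(merged->read_high) * 8);
        bitmap_or(merged->write_low, l0->write_low, l1->write_low,
                  sizeof(merged->write_low) * 8);
        bitmap_or(merged->write_high, l0->write_high, l1->write_high,
                  sizeof(merged->write_high) * 8);
    }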

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v2:
 - Pass shadow_ctrl into update_msrbitmap, and check there if
   CPU_BASED_ACTIVATE_MSR_BITMAP is set.
 - Do not enable MSR bitmap unless it's enabled in both L1 and L2.
 - Rename L1 guest to L2 in nestedvmx struct comment.

Changes since v1:
 - Split the x2APIC MSR fix into a separate patch.
 - Move setting MSR_BITMAP vmcs field into load_vvmcs_host_state for
   virtual vmexit.
 - Allocate memory with MEMF_no_owner.
 - Use tabs to align comment of the nestedvmx struct field.
---
 xen/arch/x86/hvm/vmx/vvmx.c        | 62 ++++++++++++++++++++++++++++--
 xen/include/asm-x86/hvm/vmx/vvmx.h |  3 +-
 2 files changed, 61 insertions(+), 4 deletions(-)

Comments

Tian, Kevin Feb. 4, 2020, 1:46 a.m. UTC | #1
> From: Roger Pau Monne <roger.pau@citrix.com>
> Sent: Tuesday, February 4, 2020 1:37 AM
> 
> The current implementation of nested VMX has half-baked handling of MSR
> bitmaps for the L1 VMM: it maps the MSR bitmap provided by the L1 VMM,
> but doesn't actually load it into the nested vmcs, and thus the nested
> guest vmcs ends up using the same MSR bitmap as the L1 VMM.
> 
> This is wrong, as there's no assurance that the set of features enabled
> for the L1 vmcs is the same set L1 itself is going to use in the nested
> vmcs, and the mismatch can therefore lead to misconfigurations.
> 
> For example, the L1 vmcs can use x2APIC virtualization and virtual
> interrupt delivery, in which case some x2APIC MSRs won't be trapped so
> that they can be handled directly by the hardware using virtualization
> extensions. On the other hand, the nested vmcs created by the L1 VMM
> might not use any of those features, so an MSR bitmap that doesn't trap
> accesses to the x2APIC MSRs would let such accesses leak through to the
> underlying hardware.
> 
> Fix this by crafting a merged MSR bitmap from the bitmap used for L1
> and the bitmap L1 provides for the nested guest.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Jan Beulich Feb. 4, 2020, 9:32 a.m. UTC | #2
On 03.02.2020 18:37, Roger Pau Monne wrote:
> @@ -182,6 +192,11 @@ void nvmx_vcpu_destroy(struct vcpu *v)
>          free_domheap_page(v->arch.hvm.vmx.vmwrite_bitmap);
>          v->arch.hvm.vmx.vmwrite_bitmap = NULL;
>      }
> +    if ( nvmx->msr_merged )
> +    {
> +        free_domheap_page(nvmx->msr_merged);
> +        nvmx->msr_merged = NULL;
> +    }

Can this not be done ...

>  }
>   
>  void nvmx_domain_relinquish_resources(struct domain *d)

... in this function, thus happening earlier upon domain
cleanup, and leaving fewer resources allocated in case a domain
ends up as a zombie (due to another bug elsewhere)? Actually -
aren't you extending an existing bug this way? When
nestedhvm_vcpu_initialise() fails, nestedhvm_vcpu_destroy()
won't be called afaict. Hence nvmx_vcpu_initialise() not
cleaning up after itself in case of failure looks to be a
memory leak. As of b3344bb1cae0 any such will be taken care
of implicitly as long as the freeing happens on the
relinquish-resources paths.

Jan
Roger Pau Monné Feb. 4, 2020, 11:13 a.m. UTC | #3
On Tue, Feb 04, 2020 at 10:32:47AM +0100, Jan Beulich wrote:
> On 03.02.2020 18:37, Roger Pau Monne wrote:
> > @@ -182,6 +192,11 @@ void nvmx_vcpu_destroy(struct vcpu *v)
> >          free_domheap_page(v->arch.hvm.vmx.vmwrite_bitmap);
> >          v->arch.hvm.vmx.vmwrite_bitmap = NULL;
> >      }
> > +    if ( nvmx->msr_merged )
> > +    {
> > +        free_domheap_page(nvmx->msr_merged);
> > +        nvmx->msr_merged = NULL;
> > +    }
> 
> Can this not be done ...
> 
> >  }
> >   
> >  void nvmx_domain_relinquish_resources(struct domain *d)
> 
> ... in this function, thus happening earlier upon domain
> cleanup, and leaving fewer resources allocated in case a domain
> ends up as a zombie (due to another bug elsewhere)? Actually -
> aren't you extending an existing bug this way? When
> nestedhvm_vcpu_initialise() fails, nestedhvm_vcpu_destroy()
> won't be called afaict.

nestedhvm_vcpu_destroy will be called by hvm_vcpu_initialise (the
caller of nestedhvm_vcpu_initialise) AFAICT.

> Hence nvmx_vcpu_initialise() not
> cleaning up after itself in case of failure looks to be a
> memory leak. As of b3344bb1cae0 any such will be taken care
> of implicitly as long as the freeing happens on the
> relinquish-resources paths.

I can move the new addition to nvmx_domain_relinquish_resources; I
originally added it to nvmx_vcpu_destroy because that's where other
pages are also freed.

Thanks, Roger.
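
For illustration only, moving the freeing onto the relinquish path
could look roughly like the sketch below; the exact body of
nvmx_domain_relinquish_resources() and the remaining per-vcpu teardown
are assumptions, not part of the patch under review:

    void nvmx_domain_relinquish_resources(struct domain *d)
    {
        struct vcpu *v;

        for_each_vcpu ( d, v )
        {
            struct nestedvmx *nvmx = &vcpu_2_nvmx(v);

            /*
             * Freeing here (rather than in nvmx_vcpu_destroy()) releases
             * the page earlier during domain cleanup, even if the domain
             * ends up as a zombie and vcpu destruction is never reached.
             */
            if ( nvmx->msr_merged )
            {
                free_domheap_page(nvmx->msr_merged);
                nvmx->msr_merged = NULL;
            }

            /* ... existing per-vcpu teardown stays as-is ... */
        }
    }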
Jan Beulich Feb. 4, 2020, 11:24 a.m. UTC | #4
On 04.02.2020 12:13, Roger Pau Monné wrote:
> On Tue, Feb 04, 2020 at 10:32:47AM +0100, Jan Beulich wrote:
>> On 03.02.2020 18:37, Roger Pau Monne wrote:
>>> @@ -182,6 +192,11 @@ void nvmx_vcpu_destroy(struct vcpu *v)
>>>          free_domheap_page(v->arch.hvm.vmx.vmwrite_bitmap);
>>>          v->arch.hvm.vmx.vmwrite_bitmap = NULL;
>>>      }
>>> +    if ( nvmx->msr_merged )
>>> +    {
>>> +        free_domheap_page(nvmx->msr_merged);
>>> +        nvmx->msr_merged = NULL;
>>> +    }
>>
>> Can this not be done ...
>>
>>>  }
>>>   
>>>  void nvmx_domain_relinquish_resources(struct domain *d)
>>
>> ... in this function, thus happening earlier upon domain
>> cleanup, and leaving fewer resources allocated in case a domain
>> ends up as a zombie (due to another bug elsewhere)? Actually -
>> aren't you extending an existing bug this way? When
>> nestedhvm_vcpu_initialise() fails, nestedhvm_vcpu_destroy()
>> won't be called afaict.
> 
> nestedhvm_vcpu_destroy will be called by hvm_vcpu_initialise (the
> caller of nestedhvm_vcpu_initialise) AFAICT.

Unless nestedhvm_vcpu_initialise() itself fails:

    if ( nestedhvm_enabled(d)
         && (rc = nestedhvm_vcpu_initialise(v)) < 0 ) /* teardown: nestedhvm_vcpu_destroy */
        goto fail5;
...
 fail6:
    nestedhvm_vcpu_destroy(v);
 fail5:
    free_compat_arg_xlat(v);

The common issue of these functions not being idempotent.

>> Hence nvmx_vcpu_initialise() not
>> cleaning up after itself in case of failure looks to be a
>> memory leak. As of b3344bb1cae0 any such will be taken care
>> of implicitly as long as the freeing happens on the
>> relinquish-resources paths.
> 
> I can move the new addition to nvmx_domain_relinquish_resources; I
> originally added it to nvmx_vcpu_destroy because that's where other
> pages are also freed.

Right, hence my mentioning a pre-existing issue that you're
widening a little with the way things are currently done.

Jan

Patch

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 47eee1e5b9..f118f88683 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -128,6 +128,16 @@  int nvmx_vcpu_initialise(struct vcpu *v)
         unmap_domain_page(vw);
     }
 
+    if ( cpu_has_vmx_msr_bitmap )
+    {
+        nvmx->msr_merged = alloc_domheap_page(d, MEMF_no_owner);
+        if ( !nvmx->msr_merged )
+        {
+            gdprintk(XENLOG_ERR, "nest: allocation for MSR bitmap failed\n");
+            return -ENOMEM;
+        }
+    }
+
     nvmx->ept.enabled = 0;
     nvmx->guest_vpid = 0;
     nvmx->vmxon_region_pa = INVALID_PADDR;
@@ -182,6 +192,11 @@  void nvmx_vcpu_destroy(struct vcpu *v)
         free_domheap_page(v->arch.hvm.vmx.vmwrite_bitmap);
         v->arch.hvm.vmx.vmwrite_bitmap = NULL;
     }
+    if ( nvmx->msr_merged )
+    {
+        free_domheap_page(nvmx->msr_merged);
+        nvmx->msr_merged = NULL;
+    }
 }
  
 void nvmx_domain_relinquish_resources(struct domain *d)
@@ -548,6 +563,35 @@  unsigned long *_shadow_io_bitmap(struct vcpu *v)
     return nestedhvm_vcpu_iomap_get(port80, portED);
 }
 
+static void update_msrbitmap(struct vcpu *v, uint32_t shadow_ctrl)
+{
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+    struct vmx_msr_bitmap *msr_bitmap;
+
+    if ( !(shadow_ctrl & CPU_BASED_ACTIVATE_MSR_BITMAP) ||
+         !nvmx->msrbitmap )
+       return;
+
+    msr_bitmap = __map_domain_page(nvmx->msr_merged);
+
+    bitmap_or(msr_bitmap->read_low, nvmx->msrbitmap->read_low,
+              v->arch.hvm.vmx.msr_bitmap->read_low,
+              sizeof(msr_bitmap->read_low) * 8);
+    bitmap_or(msr_bitmap->read_high, nvmx->msrbitmap->read_high,
+              v->arch.hvm.vmx.msr_bitmap->read_high,
+              sizeof(msr_bitmap->read_high) * 8);
+    bitmap_or(msr_bitmap->write_low, nvmx->msrbitmap->write_low,
+              v->arch.hvm.vmx.msr_bitmap->write_low,
+              sizeof(msr_bitmap->write_low) * 8);
+    bitmap_or(msr_bitmap->write_high, nvmx->msrbitmap->write_high,
+              v->arch.hvm.vmx.msr_bitmap->write_high,
+              sizeof(msr_bitmap->write_high) * 8);
+
+    unmap_domain_page(msr_bitmap);
+
+    __vmwrite(MSR_BITMAP, page_to_maddr(nvmx->msr_merged));
+}
+
 void nvmx_update_exec_control(struct vcpu *v, u32 host_cntrl)
 {
     u32 pio_cntrl = (CPU_BASED_ACTIVATE_IO_BITMAP
@@ -558,10 +602,17 @@  void nvmx_update_exec_control(struct vcpu *v, u32 host_cntrl)
     shadow_cntrl = __n2_exec_control(v);
     pio_cntrl &= shadow_cntrl;
     /* Enforce the removed features */
-    shadow_cntrl &= ~(CPU_BASED_ACTIVATE_MSR_BITMAP
-                      | CPU_BASED_ACTIVATE_IO_BITMAP
+    shadow_cntrl &= ~(CPU_BASED_ACTIVATE_IO_BITMAP
                       | CPU_BASED_UNCOND_IO_EXITING);
-    shadow_cntrl |= host_cntrl;
+    /*
+     * Do NOT enforce the MSR bitmap currently used by L1, as certain hardware
+     * virtualization features require specific MSR bitmap settings, but
+     * without the guest also using these same features the bitmap could be
+     * leaking through unwanted MSR accesses.
+     */
+    shadow_cntrl |= host_cntrl & ~CPU_BASED_ACTIVATE_MSR_BITMAP;
+    if ( !(shadow_cntrl & host_cntrl & CPU_BASED_ACTIVATE_MSR_BITMAP) )
+      shadow_cntrl &= ~CPU_BASED_ACTIVATE_MSR_BITMAP;
     if ( pio_cntrl == CPU_BASED_UNCOND_IO_EXITING ) {
         /* L1 VMM intercepts all I/O instructions */
         shadow_cntrl |= CPU_BASED_UNCOND_IO_EXITING;
@@ -584,6 +635,8 @@  void nvmx_update_exec_control(struct vcpu *v, u32 host_cntrl)
         __vmwrite(IO_BITMAP_B, virt_to_maddr(bitmap) + PAGE_SIZE);
     }
 
+    update_msrbitmap(v, shadow_cntrl);
+
     /* TODO: change L0 intr window to MTF or NMI window */
     __vmwrite(CPU_BASED_VM_EXEC_CONTROL, shadow_cntrl);
 }
@@ -1278,6 +1331,9 @@  static void load_vvmcs_host_state(struct vcpu *v)
     hvm_set_tsc_offset(v, v->arch.hvm.cache_tsc_offset, 0);
 
     set_vvmcs(v, VM_ENTRY_INTR_INFO, 0);
+
+    if ( v->arch.hvm.vmx.exec_control & CPU_BASED_ACTIVATE_MSR_BITMAP )
+        __vmwrite(MSR_BITMAP, virt_to_maddr(v->arch.hvm.vmx.msr_bitmap));
 }
 
 static void sync_exception_state(struct vcpu *v)
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 6b9c4ae0b2..c41f089939 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -37,7 +37,8 @@  struct nestedvmx {
      */
     paddr_t    vmxon_region_pa;
     void       *iobitmap[2];		/* map (va) of L1 guest I/O bitmap */
-    void       *msrbitmap;		/* map (va) of L1 guest MSR bitmap */
+    struct vmx_msr_bitmap *msrbitmap;	/* map (va) of L1 guest MSR bitmap */
+    struct page_info *msr_merged;	/* merged L1 and L2 MSR bitmap */
     /* deferred nested interrupt */
     struct {
         unsigned long intr_info;