From patchwork Wed Jan  8 12:31:40 2020
X-Patchwork-Submitter: Roger Pau Monné
X-Patchwork-Id: 11323413
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Kevin Tian, Jun Nakajima, Wei Liu, Andrew Cooper, Jan Beulich,
    Roger Pau Monne
Subject: [Xen-devel] [PATCH] nvmx: implement support for MSR bitmaps
Date: Wed, 8 Jan 2020 13:31:40 +0100
Message-ID: <20200108123140.77999-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.24.1

The current implementation of nested VMX has a half-baked handling of MSR
bitmaps for the L1 VMM: it maps the MSR bitmap provided by the L1 VMM, but
doesn't actually load it into the nested vmcs, and thus the nested guest
vmcs ends up using the same MSR bitmap as the L1 VMM.

This is wrong, as there's no assurance that the set of features enabled for
the L1 vmcs is the same one that L1 itself is going to use in the nested
vmcs, and the mismatch can lead to misconfigurations.

For example, the L1 vmcs can use x2APIC virtualization and virtual
interrupt delivery, in which case some x2APIC MSRs won't be trapped so that
they can be handled directly by the hardware using the virtualization
extensions. On the other hand, the nested vmcs created by the L1 VMM might
not use any of those features, so using an MSR bitmap that doesn't trap
accesses to the x2APIC MSRs would leak them to the underlying hardware.

Fix this by merging the MSR bitmap used by L1 with the one provided for the
nested guest, and make sure the nested vmcs MSR bitmap always traps
accesses to the x2APIC MSR range, since hardware-assisted x2APIC
virtualization and virtual interrupt delivery are never made available to
the L1 VMM.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
This seems better than what's done currently, but TBH there's so much work
to be done in nvmx in order to make it functional and secure that I'm not
sure whether building on top of the current implementation is a sane thing
to do, or whether it would be better to start from scratch and re-implement
nvmx to support just the minimum required set of VT-x features in a sane
and safe way.
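To make the merge logic easier to follow, here is a rough standalone sketch
(not part of the patch; struct msr_bitmap, merge_bitmaps and set_intercept
are illustrative stand-ins for Xen's struct vmx_msr_bitmap and for the
bitmap_or()/set_bit() calls in update_msrbitmap() below):

/*
 * Standalone sketch, NOT part of the patch: plain C stand-ins for the 4KiB
 * VMX MSR bitmap page.  An MSR access is intercepted if either input bitmap
 * intercepts it, and the x2APIC range (0x800-0x8ff) is always intercepted.
 */
#include <stdint.h>
#include <stdio.h>

#define QWORDS_PER_1K (1024 * 8 / 64)   /* one 1KiB quarter of the page */

struct msr_bitmap {
    uint64_t read_low[QWORDS_PER_1K];   /* reads of MSRs 0x0-0x1fff */
    uint64_t read_high[QWORDS_PER_1K];  /* reads of MSRs 0xc0000000-0xc0001fff */
    uint64_t write_low[QWORDS_PER_1K];  /* writes of MSRs 0x0-0x1fff */
    uint64_t write_high[QWORDS_PER_1K]; /* writes of MSRs 0xc0000000-0xc0001fff */
};

static void set_intercept(uint64_t *map, unsigned int msr)
{
    map[msr / 64] |= UINT64_C(1) << (msr % 64);
}

static void merge_bitmaps(struct msr_bitmap *out, const struct msr_bitmap *l0,
                          const struct msr_bitmap *l1)
{
    const uint64_t *a = (const void *)l0, *b = (const void *)l1;
    uint64_t *o = (void *)out;
    unsigned int i, msr;

    /* Trap an MSR if either L0 or L1 wants it trapped. */
    for ( i = 0; i < sizeof(*out) / sizeof(uint64_t); i++ )
        o[i] = a[i] | b[i];

    /* nvmx offers no x2APIC acceleration, so always trap the x2APIC MSRs. */
    for ( msr = 0x800; msr <= 0x8ff; msr++ )
    {
        set_intercept(out->read_low, msr);
        set_intercept(out->write_low, msr);
    }
}

int main(void)
{
    static struct msr_bitmap l0, l1, merged;

    set_intercept(l0.write_low, 0x1a0);   /* e.g. L0 traps writes to MSR 0x1a0 */
    merge_bitmaps(&merged, &l0, &l1);
    printf("MSR 0x1a0 write trapped: %d\n",
           (int)(merged.write_low[0x1a0 / 64] >> (0x1a0 % 64) & 1));
    printf("x2APIC ICR (0x830) read trapped: %d\n",
           (int)(merged.read_low[0x830 / 64] >> (0x830 % 64) & 1));
    return 0;
}

The patch itself does the equivalent with bitmap_or() on the mapped domheap
page and then writes the machine address of the merged page into the
MSR_BITMAP vmcs field.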
---
 xen/arch/x86/hvm/vmx/vvmx.c        | 76 ++++++++++++++++++++++++++++--
 xen/include/asm-x86/hvm/vmx/vvmx.h |  3 +-
 2 files changed, 75 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index af48b0beef..ad81de051b 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -128,6 +128,16 @@ int nvmx_vcpu_initialise(struct vcpu *v)
         unmap_domain_page(vw);
     }
 
+    if ( cpu_has_vmx_msr_bitmap )
+    {
+        nvmx->msr_merged = alloc_domheap_page(NULL, 0);
+        if ( !nvmx->msr_merged )
+        {
+            gdprintk(XENLOG_ERR, "nest: allocation for MSR bitmap failed\n");
+            return -ENOMEM;
+        }
+    }
+
     nvmx->ept.enabled = 0;
     nvmx->guest_vpid = 0;
     nvmx->vmxon_region_pa = INVALID_PADDR;
@@ -182,6 +192,11 @@ void nvmx_vcpu_destroy(struct vcpu *v)
         free_domheap_page(v->arch.hvm.vmx.vmwrite_bitmap);
         v->arch.hvm.vmx.vmwrite_bitmap = NULL;
     }
+    if ( nvmx->msr_merged )
+    {
+        free_domheap_page(nvmx->msr_merged);
+        nvmx->msr_merged = NULL;
+    }
 }
 
 void nvmx_domain_relinquish_resources(struct domain *d)
@@ -548,6 +563,50 @@ unsigned long *_shadow_io_bitmap(struct vcpu *v)
     return nestedhvm_vcpu_iomap_get(port80, portED);
 }
 
+static void update_msrbitmap(struct vcpu *v)
+{
+    struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+    struct vmx_msr_bitmap *msr_bitmap;
+    unsigned int msr;
+
+    ASSERT(__n2_exec_control(v) & CPU_BASED_ACTIVATE_MSR_BITMAP);
+
+    if ( !nvmx->msrbitmap )
+        return;
+
+    msr_bitmap = __map_domain_page(nvmx->msr_merged);
+
+    bitmap_or(msr_bitmap->read_low, nvmx->msrbitmap->read_low,
+              v->arch.hvm.vmx.msr_bitmap->read_low,
+              sizeof(msr_bitmap->read_low) * 8);
+    bitmap_or(msr_bitmap->read_high, nvmx->msrbitmap->read_high,
+              v->arch.hvm.vmx.msr_bitmap->read_high,
+              sizeof(msr_bitmap->read_high) * 8);
+    bitmap_or(msr_bitmap->write_low, nvmx->msrbitmap->write_low,
+              v->arch.hvm.vmx.msr_bitmap->write_low,
+              sizeof(msr_bitmap->write_low) * 8);
+    bitmap_or(msr_bitmap->write_high, nvmx->msrbitmap->write_high,
+              v->arch.hvm.vmx.msr_bitmap->write_high,
+              sizeof(msr_bitmap->write_high) * 8);
+
+    /*
+     * Nested VMX doesn't support any x2APIC hardware virtualization, so
+     * make sure all the x2APIC MSRs are trapped.
+     */
+    ASSERT(!(__n2_secondary_exec_control(v) &
+             (SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE |
+              SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY)) );
+    for ( msr = MSR_X2APIC_FIRST; msr <= MSR_X2APIC_FIRST + 0xff; msr++ )
+    {
+        set_bit(msr, msr_bitmap->read_low);
+        set_bit(msr, msr_bitmap->write_low);
+    }
+
+    unmap_domain_page(msr_bitmap);
+
+    __vmwrite(MSR_BITMAP, page_to_maddr(nvmx->msr_merged));
+}
+
 void nvmx_update_exec_control(struct vcpu *v, u32 host_cntrl)
 {
     u32 pio_cntrl = (CPU_BASED_ACTIVATE_IO_BITMAP
@@ -558,10 +617,15 @@ void nvmx_update_exec_control(struct vcpu *v, u32 host_cntrl)
     shadow_cntrl = __n2_exec_control(v);
     pio_cntrl &= shadow_cntrl;
     /* Enforce the removed features */
-    shadow_cntrl &= ~(CPU_BASED_ACTIVATE_MSR_BITMAP
-                      | CPU_BASED_ACTIVATE_IO_BITMAP
+    shadow_cntrl &= ~(CPU_BASED_ACTIVATE_IO_BITMAP
                       | CPU_BASED_UNCOND_IO_EXITING);
-    shadow_cntrl |= host_cntrl;
+    /*
+     * Do NOT enforce the MSR bitmap currently used by L1, as certain hardware
+     * virtualization features require specific MSR bitmap settings, but without
+     * using such features the bitmap could be leaking through unwanted MSR
+     * accesses.
+     */
+    shadow_cntrl |= (host_cntrl & ~CPU_BASED_ACTIVATE_MSR_BITMAP);
     if ( pio_cntrl == CPU_BASED_UNCOND_IO_EXITING ) {
         /* L1 VMM intercepts all I/O instructions */
         shadow_cntrl |= CPU_BASED_UNCOND_IO_EXITING;
@@ -584,6 +648,9 @@ void nvmx_update_exec_control(struct vcpu *v, u32 host_cntrl)
         __vmwrite(IO_BITMAP_B, virt_to_maddr(bitmap) + PAGE_SIZE);
     }
 
+    if ( shadow_cntrl & CPU_BASED_ACTIVATE_MSR_BITMAP )
+        update_msrbitmap(v);
+
     /* TODO: change L0 intr window to MTF or NMI window */
     __vmwrite(CPU_BASED_VM_EXEC_CONTROL, shadow_cntrl);
 }
@@ -1351,6 +1418,9 @@ static void virtual_vmexit(struct cpu_user_regs *regs)
     vmx_update_secondary_exec_control(v);
     vmx_update_exception_bitmap(v);
 
+    if ( v->arch.hvm.vmx.exec_control & CPU_BASED_ACTIVATE_MSR_BITMAP )
+        __vmwrite(MSR_BITMAP, virt_to_maddr(v->arch.hvm.vmx.msr_bitmap));
+
     load_vvmcs_host_state(v);
 
     if ( lm_l1 != lm_l2 )
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 6b9c4ae0b2..812c1d49bd 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -37,7 +37,8 @@ struct nestedvmx {
      */
     paddr_t vmxon_region_pa;
     void *iobitmap[2];                /* map (va) of L1 guest I/O bitmap */
-    void *msrbitmap;                  /* map (va) of L1 guest MSR bitmap */
+    struct vmx_msr_bitmap *msrbitmap; /* map (va) of L1 guest MSR bitmap */
+    struct page_info *msr_merged;     /* merged L1 and L1 guest MSR bitmap */
     /* deferred nested interrupt */
     struct {
         unsigned long intr_info;