
[05/10] KVM/nVMX: Use kvm_vcpu_map when mapping the L1 MSR bitmap

Message ID 1519235241-6500-6-git-send-email-karahmed@amazon.de (mailing list archive)
State New, archived

Commit Message

KarimAllah Ahmed Feb. 21, 2018, 5:47 p.m. UTC
... since using kvm_vcpu_gpa_to_page() and kmap() will only work for guest
memory that has a "struct page".

The life-cycle of the mapping also changes to avoid doing map and unmap on
every single exit (which becomes very expensive once we use memremap). Now
the memory is mapped and only unmapped when a new VMCS12 is loaded into the
vCPU (or when the vCPU is freed!).

Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
---
 arch/x86/kvm/vmx.c | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)
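
In short, the new lifecycle looks like this (a condensed sketch based on
the patch below; the wrapper function is illustrative, not part of the
series):

	/* Map once, the first time the bitmap is needed after a vmptr
	 * load; this works whether or not the guest memory is backed
	 * by a "struct page". */
	static bool nested_map_l1_msr_bitmap(struct kvm_vcpu *vcpu,
					     struct vmcs12 *vmcs12)
	{
		struct kvm_host_map *map = &to_vmx(vcpu)->nested.msr_bitmap_map;

		if (!kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->msr_bitmap), map))
			return false;

		/* map->kaddr remains valid across subsequent vm-exits. */
		return true;
	}

	/* The mapping is torn down only in nested_release_vmcs12() and
	 * free_nested():  kvm_vcpu_unmap(&vmx->nested.msr_bitmap_map); */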

Comments

Konrad Rzeszutek Wilk Feb. 23, 2018, 9:36 p.m. UTC | #1
On Wed, Feb 21, 2018 at 06:47:16PM +0100, KarimAllah Ahmed wrote:
> ... since using kvm_vcpu_gpa_to_page() and kmap() will only work for guest
> memory that has a "struct page".
> 
> The life-cycle of the mapping also changes to avoid doing map and unmap on

s/also changes/has been changed/ ?
KarimAllah Ahmed Feb. 23, 2018, 11:45 p.m. UTC | #2
On Fri, 2018-02-23 at 16:36 -0500, Konrad Rzeszutek Wilk wrote:
> On Wed, Feb 21, 2018 at 06:47:16PM +0100, KarimAllah Ahmed wrote:
> > ... since using kvm_vcpu_gpa_to_page() and kmap() will only work for guest
> > memory that has a "struct page".
> > 
> > The life-cycle of the mapping also changes to avoid doing map and unmap on
> 
> s/also changes/has been changed/ ?

Right, will fix in v2.

Thanks.
Paolo Bonzini April 12, 2018, 2:36 p.m. UTC | #3
On 21/02/2018 18:47, KarimAllah Ahmed wrote:
> ... since using kvm_vcpu_gpa_to_page() and kmap() will only work for guest
> memory that has a "struct page".
> 
> The life-cycle of the mapping also changes to avoid doing map and unmap on
> every single exit (which becomes very expensive once we use memremap). Now
> the memory is mapped and only unmapped when a new VMCS12 is loaded into the
> vCPU (or when the vCPU is freed!).

In this particular case SMM is not an issue because it cannot use VMX.
Therefore it's safe to ignore the SMM address space.  You can then
introduce

int kvm_map_gfn(struct kvm_vcpu *vcpu, gfn_t gfn,
		struct kvm_host_map *map)

	calling kvm_gfn_to_memslot + __kvm_map_gfn

which could also handle the caching aspect.  But please let's look at it
later, making the lifecycle change separate from the new API.

Paolo

> Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
> ---
>  arch/x86/kvm/vmx.c | 19 ++++++++++++-------
>  1 file changed, 12 insertions(+), 7 deletions(-)
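
For reference, here is a minimal sketch of the helper suggested above,
assuming __kvm_map_gfn() is the slot-based mapper underlying
kvm_vcpu_map(); the exact signatures are guesses, not code from this
thread:

	int kvm_map_gfn(struct kvm_vcpu *vcpu, gfn_t gfn,
			struct kvm_host_map *map)
	{
		/* SMM cannot use VMX, so only the default (non-SMM)
		 * memslots need to be consulted here. */
		struct kvm_memory_slot *slot = gfn_to_memslot(vcpu->kvm, gfn);

		return __kvm_map_gfn(slot, gfn, map);	/* assumed helper */
	}

Such a wrapper would also be a natural place to hook in the caching Paolo
mentions, by reusing a still-valid kvm_host_map instead of mapping again.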

Patch

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 0a98d1a..4bfef58 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -462,6 +462,9 @@ struct nested_vmx {
 	struct page *apic_access_page;
 	struct page *virtual_apic_page;
 	struct page *pi_desc_page;
+
+	struct kvm_host_map msr_bitmap_map;
+
 	struct pi_desc *pi_desc;
 	bool pi_pending;
 	u16 posted_intr_nv;
@@ -7664,6 +7667,8 @@ static inline void nested_release_vmcs12(struct vcpu_vmx *vmx)
 				  vmx->nested.current_vmptr >> PAGE_SHIFT,
 				  vmx->nested.cached_vmcs12, 0, VMCS12_SIZE);
 
+	kvm_vcpu_unmap(&vmx->nested.msr_bitmap_map);
+
 	vmx->nested.current_vmptr = -1ull;
 }
 
@@ -7704,6 +7709,8 @@ static void free_nested(struct vcpu_vmx *vmx)
 		vmx->nested.pi_desc = NULL;
 	}
 
+	kvm_vcpu_unmap(&vmx->nested.msr_bitmap_map);
+
 	free_loaded_vmcs(&vmx->nested.vmcs02);
 }
 
@@ -10374,9 +10381,10 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 						 struct vmcs12 *vmcs12)
 {
 	int msr;
-	struct page *page;
 	unsigned long *msr_bitmap_l1;
 	unsigned long *msr_bitmap_l0 = to_vmx(vcpu)->nested.vmcs02.msr_bitmap;
+	struct kvm_host_map *map = &to_vmx(vcpu)->nested.msr_bitmap_map;
+
 	/*
 	 * pred_cmd & spec_ctrl are trying to verify two things:
 	 *
@@ -10402,11 +10410,11 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 	    !pred_cmd && !spec_ctrl)
 		return false;
 
-	page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->msr_bitmap);
-	if (is_error_page(page))
+	if (!kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->msr_bitmap), map))
 		return false;
 
-	msr_bitmap_l1 = (unsigned long *)kmap(page);
+	msr_bitmap_l1 = (unsigned long *)map->kaddr;
+
 	if (nested_cpu_has_apic_reg_virt(vmcs12)) {
 		/*
 		 * L0 need not intercept reads for MSRs between 0x800 and 0x8ff, it
@@ -10454,9 +10462,6 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 					MSR_IA32_PRED_CMD,
 					MSR_TYPE_W);
 
-	kunmap(page);
-	kvm_release_page_clean(page);
-
 	return true;
 }