[8/8] KVM: nSVM: read only changed fields of the nested guest data area

Message ID 20200820091327.197807-9-mlevitsk@redhat.com
State New, archived
Series KVM: nSVM: ondemand nested state allocation + nested guest state caching

Commit Message

Maxim Levitsky Aug. 20, 2020, 9:13 a.m. UTC
This allows us to read only the fields that the nested guest marked as
dirty on vmentry.

I doubt this has any measurable performance impact, but this way the
behavior is a bit closer to that of real hardware.
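
For reference, the clean-bits contract this relies on (mirrored by the
vmcb_mark_dirty() helper in svm.h) is that the guest clears a field
group's clean bit whenever it modifies that group; a bit that is still
set promises the group is unchanged since the last VMRUN of the same
VMCB. A minimal sketch of the guest (L1) side, borrowing KVM's bit
names and a made-up new_cr4 value purely for illustration:

	/* L1 modified a control register field, so it must mark the
	 * VMCB_CR field group dirty before the next VMRUN: */
	vmcb12->save.cr4 = new_cr4;
	vmcb12->control.clean &= ~(1 << VMCB_CR);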

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/nested.c | 58 +++++++++++++++++++++++++--------------
 arch/x86/kvm/svm/svm.c    |  2 +-
 arch/x86/kvm/svm/svm.h    |  5 ++++
 3 files changed, 44 insertions(+), 21 deletions(-)

Comments

Paolo Bonzini Aug. 20, 2020, 9:55 a.m. UTC | #1
On 20/08/20 11:13, Maxim Levitsky wrote:
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 06668e0f93e7..f0bb7f622dca 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -3924,7 +3924,7 @@ static int svm_pre_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
>  		if (kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(vmcb_gpa), &map) == -EINVAL)
>  			return 1;
>  
> -		load_nested_vmcb(svm, map.hva, vmcb);
> +		load_nested_vmcb(svm, map.hva, vmcb_gpa);
>  		ret = enter_svm_guest_mode(svm);
>  

Wrong patch?

Paolo
Maxim Levitsky Aug. 20, 2020, 9:57 a.m. UTC | #2
On Thu, 2020-08-20 at 11:55 +0200, Paolo Bonzini wrote:
> On 20/08/20 11:13, Maxim Levitsky wrote:
> > diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> > index 06668e0f93e7..f0bb7f622dca 100644
> > --- a/arch/x86/kvm/svm/svm.c
> > +++ b/arch/x86/kvm/svm/svm.c
> > @@ -3924,7 +3924,7 @@ static int svm_pre_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
> >  		if (kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(vmcb_gpa), &map) == -EINVAL)
> >  			return 1;
> >  
> > -		load_nested_vmcb(svm, map.hva, vmcb);
> > +		load_nested_vmcb(svm, map.hva, vmcb_gpa);
> >  		ret = enter_svm_guest_mode(svm);
> >  
> 
> Wrong patch?

Absolutely. I reordered the refactoring patches to the beginning of the
series and didn't test this one enough afterwards.

Best regards,
	Maxim Levitsky

> 
> Paolo
>
Paolo Bonzini Aug. 20, 2020, 10:01 a.m. UTC | #3
On 20/08/20 11:13, Maxim Levitsky wrote:
> +	u32 clean = nested_vmcb->control.clean;
> +
> +	if (svm->nested.vmcb_gpa != vmcb_gpa) {
> +		svm->nested.vmcb_gpa = vmcb_gpa;
> +		clean = 0;
> +	}

You probably should set clean to 0 also if the guest doesn't have the
VMCBCLEAN feature (so, you first need an extra patch to add the
VMCBCLEAN feature to cpufeatures.h).  It's probably best to cache the
guest vmcbclean in struct vcpu_svm, too.
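
Something along these lines, roughly (X86_FEATURE_VMCBCLEAN being the
hypothetical define that the extra cpufeatures.h patch would add):

	/* a guest that was not offered the VMCBCLEAN feature may leave
	 * stale clean bits behind, so ignore them entirely: */
	if (!guest_cpuid_has(&svm->vcpu, X86_FEATURE_VMCBCLEAN))
		clean = 0;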

Paolo
Maxim Levitsky Aug. 20, 2020, 10:05 a.m. UTC | #4
On Thu, 2020-08-20 at 12:01 +0200, Paolo Bonzini wrote:
> On 20/08/20 11:13, Maxim Levitsky wrote:
> > +	u32 clean = nested_vmcb->control.clean;
> > +
> > +	if (svm->nested.vmcb_gpa != vmcb_gpa) {
> > +		svm->nested.vmcb_gpa = vmcb_gpa;
> > +		clean = 0;
> > +	}
> 
> You probably should set clean to 0 also if the guest doesn't have the
> VMCBCLEAN feature (so, you first need an extra patch to add the
> VMCBCLEAN feature to cpufeatures.h).  It's probably best to cache the
> guest vmcbclean in struct vcpu_svm, too.

Right, I totally forgot about this one.

One reason I made this patch optional is that I could instead drop it
and not 'read back' the saved area on vmexit; that would probably be
faster than what this optimization does. What do you think? Is this
patch worth it? (I submitted it because I had already implemented it
and wanted to hear opinions on it.)

Best regards,
	Maxim Levitsky

> 
> Paolo
>
Paolo Bonzini Aug. 20, 2020, 10:18 a.m. UTC | #5
On 20/08/20 12:05, Maxim Levitsky wrote:
>> You probably should set clean to 0 also if the guest doesn't have the
>> VMCBCLEAN feature (so, you first need an extra patch to add the
>> VMCBCLEAN feature to cpufeatures.h).  It's probably best to cache the
>> guest vmcbclean in struct vcpu_svm, too.
> Right, I totally forgot about this one.
> 
> One reason I made this patch optional is that I could instead drop it
> and not 'read back' the saved area on vmexit; that would probably be
> faster than what this optimization does. What do you think? Is this
> patch worth it? (I submitted it because I had already implemented it
> and wanted to hear opinions on it.)

Yeah, good point.  It's one copy either way, either on vmexit (and
partly on vmentry depending on clean bits) or on vmentry.  I had not
considered the need to copy from vmcb02 to the cached vmcb12 on vmexit. :(

Let's shelve this for a bit, and revisit it once we have separate vmcb01
and vmcb02.  Then we might still use the clean bits to avoid copying
data from vmcb12 to vmcb02, including avoiding consistency checks
because we know the vmcb02 data is legit.
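
Roughly this shape, once the split exists (just a sketch reusing the
is_dirty() helper from this patch; vmcb02/vmcb12 are the names the
split would introduce):

	/* copy a field group from vmcb12 into vmcb02 only when the
	 * guest marked it dirty; a clean group was already copied and
	 * validated on an earlier vmentry, so its consistency checks
	 * can be skipped as well: */
	if (is_dirty(vmcb12->control.clean, VMCB_DT)) {
		vmcb02->save.gdtr = vmcb12->save.gdtr;
		vmcb02->save.idtr = vmcb12->save.idtr;
	}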

Patches 1-5 are still worthwhile, so you can clean them up and send them.

Paolo
Maxim Levitsky Aug. 20, 2020, 10:26 a.m. UTC | #6
On Thu, 2020-08-20 at 12:18 +0200, Paolo Bonzini wrote:
> On 20/08/20 12:05, Maxim Levitsky wrote:
> > > You probably should set clean to 0 also if the guest doesn't have the
> > > VMCBCLEAN feature (so, you first need an extra patch to add the
> > > VMCBCLEAN feature to cpufeatures.h).  It's probably best to cache the
> > > guest vmcbclean in struct vcpu_svm, too.
> > Right, I totally forgot about this one.
> > 
> > One reason I made this patch optional is that I could instead drop it
> > and not 'read back' the saved area on vmexit; that would probably be
> > faster than what this optimization does. What do you think? Is this
> > patch worth it? (I submitted it because I had already implemented it
> > and wanted to hear opinions on it.)
> 
> Yeah, good point.  It's one copy either way, either on vmexit (and
> partly on vmentry depending on clean bits) or on vmentry.  I had not
> considered the need to copy from vmcb02 to the cached vmcb12 on vmexit. :(
> 
> Let's shelve this for a bit, and revisit it once we have separate vmcb01
> and vmcb02.  Then we might still use the clean bits to avoid copying
> data from vmcb12 to vmcb02, including avoiding consistency checks
> because we know the vmcb02 data is legit.
It makes sense, I guess. The vmcb02 would then play the role of a cache
of vmcb12.

> 
> Patches 1-5 are still worthwhile, so you can clean them up and send them.
> 
> Paolo

OK, on it now.

Best regards,
	Maxim Levitsky
>

Patch

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index acc4b26fcfcc..f3eef48caee6 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -266,40 +266,57 @@ static void load_nested_vmcb_control(struct vcpu_svm *svm,
 }
 
 static void load_nested_vmcb_save(struct vcpu_svm *svm,
-				  struct vmcb_save_area *save)
+				  struct vmcb_save_area *save,
+				  u32 clean)
 {
 	svm->nested.vmcb->save.rflags = save->rflags;
 	svm->nested.vmcb->save.rax    = save->rax;
 	svm->nested.vmcb->save.rsp    = save->rsp;
 	svm->nested.vmcb->save.rip    = save->rip;
 
-	svm->nested.vmcb->save.es  = save->es;
-	svm->nested.vmcb->save.cs  = save->cs;
-	svm->nested.vmcb->save.ss  = save->ss;
-	svm->nested.vmcb->save.ds  = save->ds;
-	svm->nested.vmcb->save.cpl = save->cpl;
+	if (is_dirty(clean, VMCB_SEG)) {
+		svm->nested.vmcb->save.es  = save->es;
+		svm->nested.vmcb->save.cs  = save->cs;
+		svm->nested.vmcb->save.ss  = save->ss;
+		svm->nested.vmcb->save.ds  = save->ds;
+		svm->nested.vmcb->save.cpl = save->cpl;
+	}
 
-	svm->nested.vmcb->save.gdtr = save->gdtr;
-	svm->nested.vmcb->save.idtr = save->idtr;
+	if (is_dirty(clean, VMCB_DT)) {
+		svm->nested.vmcb->save.gdtr = save->gdtr;
+		svm->nested.vmcb->save.idtr = save->idtr;
+	}
 
-	svm->nested.vmcb->save.efer = save->efer;
-	svm->nested.vmcb->save.cr3 = save->cr3;
-	svm->nested.vmcb->save.cr4 = save->cr4;
-	svm->nested.vmcb->save.cr0 = save->cr0;
+	if (is_dirty(clean, VMCB_CR)) {
+		svm->nested.vmcb->save.efer = save->efer;
+		svm->nested.vmcb->save.cr3 = save->cr3;
+		svm->nested.vmcb->save.cr4 = save->cr4;
+		svm->nested.vmcb->save.cr0 = save->cr0;
+	}
 
-	svm->nested.vmcb->save.cr2 = save->cr2;
+	if (is_dirty(clean, VMCB_CR2))
+		svm->nested.vmcb->save.cr2 = save->cr2;
 
-	svm->nested.vmcb->save.dr7 = save->dr7;
-	svm->nested.vmcb->save.dr6 = save->dr6;
+	if (is_dirty(clean, VMCB_DR)) {
+		svm->nested.vmcb->save.dr7 = save->dr7;
+		svm->nested.vmcb->save.dr6 = save->dr6;
+	}
 
-	svm->nested.vmcb->save.g_pat = save->g_pat;
+	if (is_dirty(clean, VMCB_NPT))
+		svm->nested.vmcb->save.g_pat = save->g_pat;
 }
 
 void load_nested_vmcb(struct vcpu_svm *svm, struct vmcb *nested_vmcb, u64 vmcb_gpa)
 {
-	svm->nested.vmcb_gpa = vmcb_gpa;
+	u32 clean = nested_vmcb->control.clean;
+
+	if (svm->nested.vmcb_gpa != vmcb_gpa) {
+		svm->nested.vmcb_gpa = vmcb_gpa;
+		clean = 0;
+	}
+
 	load_nested_vmcb_control(svm, &nested_vmcb->control);
-	load_nested_vmcb_save(svm, &nested_vmcb->save);
+	load_nested_vmcb_save(svm, &nested_vmcb->save, clean);
 }
 
 /*
@@ -619,7 +636,6 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 
 	/* Exit Guest-Mode */
 	leave_guest_mode(&svm->vcpu);
-	svm->nested.vmcb_gpa = 0;
 	WARN_ON_ONCE(svm->nested.nested_run_pending);
 
 	/* in case we halted in L2 */
@@ -676,7 +692,7 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 	 * Note: since CPU might have changed the values we can't
 	 * trust clean bits
 	 */
-	load_nested_vmcb_save(svm, &nested_vmcb->save);
+	load_nested_vmcb_save(svm, &nested_vmcb->save, 0);
 
 	/* Restore the original control entries */
 	copy_vmcb_control_area(&vmcb->control, &hsave->control);
@@ -759,6 +775,7 @@ int svm_allocate_nested(struct vcpu_svm *svm)
 		goto free_page3;
 
 	svm->nested.vmcb = page_address(vmcb_page);
+	svm->nested.vmcb_gpa = U64_MAX;
 	clear_page(svm->nested.vmcb);
 
 	svm->nested.initialized = true;
@@ -785,6 +802,7 @@ void svm_free_nested(struct vcpu_svm *svm)
 
 	__free_page(virt_to_page(svm->nested.vmcb));
 	svm->nested.vmcb = NULL;
+	svm->nested.vmcb_gpa = U64_MAX;
 
 	svm->nested.initialized = false;
 }
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 06668e0f93e7..f0bb7f622dca 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3924,7 +3924,7 @@ static int svm_pre_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
 		if (kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(vmcb_gpa), &map) == -EINVAL)
 			return 1;
 
-		load_nested_vmcb(svm, map.hva, vmcb);
+		load_nested_vmcb(svm, map.hva, vmcb_gpa);
 		ret = enter_svm_guest_mode(svm);
 
 		kvm_vcpu_unmap(&svm->vcpu, &map, true);
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 80231ef8de6f..4a383c519fdf 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -204,6 +204,11 @@ static inline void vmcb_mark_dirty(struct vmcb *vmcb, int bit)
 	vmcb->control.clean &= ~(1 << bit);
 }
 
+static inline bool is_dirty(u32 clean, int bit)
+{
+	return (clean & (1 << bit)) == 0;
+}
+
 static inline struct vcpu_svm *to_svm(struct kvm_vcpu *vcpu)
 {
 	return container_of(vcpu, struct vcpu_svm, vcpu);