
[1/3] KVM: nVMX: Move 'nested_run' counter to enter_guest_mode()

Message ID 20210512014759.55556-2-krish.sadhukhan@oracle.com (mailing list archive)
State New, archived
Series KVM: nVMX: Add more statistics to KVM debugfs

Commit Message

Krish Sadhukhan May 12, 2021, 1:47 a.m. UTC
Move 'nested_run' counter to enter_guest_mode() because,
    i) This counter is common to both Intel and AMD and can be incremented
       from a common place,
    ii) guest mode is a finer-grained state than the beginning of
	nested_svm_vmrun() and nested_vmx_run().

Also, rename it to 'nested_runs'.

Signed-off-by: Krish Sadhukhan <Krish.Sadhukhan@oracle.com>
---
 arch/x86/include/asm/kvm_host.h | 2 +-
 arch/x86/kvm/kvm_cache_regs.h   | 1 +
 arch/x86/kvm/svm/nested.c       | 2 --
 arch/x86/kvm/vmx/nested.c       | 2 --
 arch/x86/kvm/x86.c              | 2 +-
 5 files changed, 3 insertions(+), 6 deletions(-)

Comments

Sean Christopherson May 12, 2021, 6:30 p.m. UTC | #1
On Tue, May 11, 2021, Krish Sadhukhan wrote:
> Move 'nested_run' counter to enter_guest_mode() because,
>     i) This counter is common to both Intel and AMD and can be incremented
>        from a common place,
>     ii) guest mode is a finer-grained state than the beginning of
> 	nested_svm_vmrun() and nested_vmx_run().

Hooking enter_guest_mode() makes the name a misnomer since it will count cases
such as setting nested state and resuming from SMI, neither of which is a nested
run in the sense of L1 deliberately choosing to run L2.

And while bumping nested_run at the very beginning of VMLAUNCH/VMRESUME/VMRUN is
arguably wrong in that it counts _attempts_ instead of successful VM-Enters, it's
at least consistent.  Moving this to enter_guest_mode() means it's arbitrarily
counting VM-Enters that fail late, but not those that fail early.

If we really want it to mean "successful VM-Enter", then we should wait until
after VM-Enter actually succeeds, and do it only for an actual VM-Enter.
Dongli Zhang May 13, 2021, 4:02 p.m. UTC | #2
On 5/12/21 11:30 AM, Sean Christopherson wrote:
> On Tue, May 11, 2021, Krish Sadhukhan wrote:
>> Move 'nested_run' counter to enter_guest_mode() because,
>>     i) This counter is common to both Intel and AMD and can be incremented
>>        from a common place,
>>     ii) guest mode is a finer-grained state than the beginning of
>> 	nested_svm_vmrun() and nested_vmx_run().
> 
> Hooking enter_guest_mode() makes the name a misnomer since it will count cases
> such as setting nested state and resuming from SMI, neither of which is a nested
> run in the sense of L1 deliberately choosing to run L2.
> 
> And while bumping nested_run at the very beginning of VMLAUNCH/VMRESUME/VMRUN is
> arguably wrong in that it counts _attempts_ instead of successful VM-Enters, it's

Yes, the original purpose was to track attempts; that's why the counter is
incremented at the beginning. It helps tell whether there has been any attempt
to run an L2 VM (and, by monitoring the counter, whether the VM is actively
running one).

This helps because sometimes the VM owner does not realize that software like
jailhouse involves an L2 VM.

Without the counter, we may need to fall back temporarily on
"/sys/kernel/debug/kvm/mmu_unsync", on the assumption that the host does not
use shadow page tables unless an L2 VM is involved.

Dongli Zhang

> at least consistent.  Moving this to enter_guest_mode() means it's arbitrarily
> counting VM-Enters that fail late, but not those that fail early.
> 
> If we really want it to mean "successful VM-Enter", then we should wait until
> after VM-Enter actually succeeds, and do it only for an actual VM-Enter.
>

Patch

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 55efbacfc244..cf8557b2b90f 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1170,7 +1170,7 @@  struct kvm_vcpu_stat {
 	u64 req_event;
 	u64 halt_poll_success_ns;
 	u64 halt_poll_fail_ns;
-	u64 nested_run;
+	u64 nested_runs;
 	u64 directed_yield_attempted;
 	u64 directed_yield_successful;
 };
diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
index 3db5c42c9ecd..cf52cbff18d3 100644
--- a/arch/x86/kvm/kvm_cache_regs.h
+++ b/arch/x86/kvm/kvm_cache_regs.h
@@ -162,6 +162,7 @@  static inline u64 kvm_read_edx_eax(struct kvm_vcpu *vcpu)
 static inline void enter_guest_mode(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.hflags |= HF_GUEST_MASK;
+	++vcpu->stat.nested_runs;
 }
 
 static inline void leave_guest_mode(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 5e8d8443154e..34fc74b0d58a 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -596,8 +596,6 @@  int nested_svm_vmrun(struct kvm_vcpu *vcpu)
 	struct kvm_host_map map;
 	u64 vmcb12_gpa;
 
-	++vcpu->stat.nested_run;
-
 	if (is_smm(vcpu)) {
 		kvm_queue_exception(vcpu, UD_VECTOR);
 		return 1;
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 6058a65a6ede..94f70c0af4a4 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -3454,8 +3454,6 @@  static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
 	u32 interrupt_shadow = vmx_get_interrupt_shadow(vcpu);
 	enum nested_evmptrld_status evmptrld_status;
 
-	++vcpu->stat.nested_run;
-
 	if (!nested_vmx_check_permission(vcpu))
 		return 1;
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5bd550eaf683..6d1f51f6c344 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -243,7 +243,7 @@  struct kvm_stats_debugfs_item debugfs_entries[] = {
 	VCPU_STAT("l1d_flush", l1d_flush),
 	VCPU_STAT("halt_poll_success_ns", halt_poll_success_ns),
 	VCPU_STAT("halt_poll_fail_ns", halt_poll_fail_ns),
-	VCPU_STAT("nested_run", nested_run),
+	VCPU_STAT("nested_runs", nested_runs),
 	VCPU_STAT("directed_yield_attempted", directed_yield_attempted),
 	VCPU_STAT("directed_yield_successful", directed_yield_successful),
 	VM_STAT("mmu_shadow_zapped", mmu_shadow_zapped),