Message ID | 20210520005012.68377-3-krish.sadhukhan@oracle.com (mailing list archive)
State      | New, archived
Series     | KVM: nVMX: nSVM: Add more statistics to KVM debugfs
On Wed, May 19, 2021, Krish Sadhukhan wrote:
> Currently, the 'nested_run' statistic counts all guest-entry attempts,
> including those that fail during vmentry checks on Intel and during
> consistency checks on AMD. Convert this statistic to count only those
> guest-entries that make it past these state checks and make it to guest
> code. This will tell us the number of guest-entries that actually executed
> or tried to execute guest code.
>
> Also, rename this statistic to 'nested_runs' since it is a count.
>
> Signed-off-by: Krish Sadhukhan <Krish.Sadhukhan@oracle.com>
> Suggested-by: Sean Christopherson <seanjc@google.com>

I didn't suggest this, or at least I didn't intend to. My point was that _if_
we want stat.nested_run to count successful VM-Enters, then the counter should
be bumped only after VM-Enter succeeds. I did not mean to say that we should
actually do this.

As Dongli pointed out[*], there is value in tracking attempts, and I still
don't understand _why_ we care about only counting successful VM-Enters.

FWIW, this misses the case where L1 enters L2 in an inactive state.

[*] ed4a8dae-be99-0d88-a8dd-510afe7cb956@oracle.com
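To illustrate the distinction being debated here -- counting every emulated VM-Enter attempt versus counting only entries that survive the state checks -- the following stand-alone sketch keeps both counters side by side. It is purely hypothetical (the names nested_stats, checks_pass() and emulate_nested_entry() are invented for the example and are not KVM code):

/*
 * Hypothetical illustration only, not KVM code. It models the two counting
 * policies discussed above: a counter bumped at the start of nested VM-Enter
 * emulation (every attempt) vs. one bumped only once the consistency /
 * guest-state checks pass (successful entries).
 */
#include <stdbool.h>
#include <stdio.h>

struct nested_stats {
	unsigned long long attempts;	/* old 'nested_run' placement */
	unsigned long long runs;	/* proposed 'nested_runs' placement */
};

/* Stand-in for the vmentry/consistency checks; fails every third try. */
static bool checks_pass(int i)
{
	return (i % 3) != 0;
}

static void emulate_nested_entry(struct nested_stats *st, int i)
{
	st->attempts++;			/* counted at emulation entry */

	if (!checks_pass(i))
		return;			/* failed VM-Enter, no guest code runs */

	st->runs++;			/* counted only after checks pass */
}

int main(void)
{
	struct nested_stats st = { 0, 0 };
	int i;

	for (i = 0; i < 10; i++)
		emulate_nested_entry(&st, i);

	printf("attempts=%llu successful runs=%llu\n", st.attempts, st.runs);
	return 0;
}

With the failure pattern above it prints "attempts=10 successful runs=6"; the gap between the two numbers is exactly the difference between the old and the proposed semantics of the statistic.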
On 5/20/21 7:53 AM, Sean Christopherson wrote:
> On Wed, May 19, 2021, Krish Sadhukhan wrote:
>> Currently, the 'nested_run' statistic counts all guest-entry attempts,
>> including those that fail during vmentry checks on Intel and during
>> consistency checks on AMD. Convert this statistic to count only those
>> guest-entries that make it past these state checks and make it to guest
>> code. This will tell us the number of guest-entries that actually executed
>> or tried to execute guest code.
>>
>> Also, rename this statistic to 'nested_runs' since it is a count.
>>
>> Signed-off-by: Krish Sadhukhan <Krish.Sadhukhan@oracle.com>
>> Suggested-by: Sean Christopherson <seanjc@google.com>
> I didn't suggest this, or at least I didn't intend to.

The idea behind my v1 patch was actually to eliminate failed vmentry attempts
from the count, but of course I didn't place the counter in the right location.
In v2, I placed it based on your suggestion. So the burden of the intention to
count only successful vmentries still falls on me.
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 55efbacfc244..cf8557b2b90f 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1170,7 +1170,7 @@ struct kvm_vcpu_stat {
 	u64 req_event;
 	u64 halt_poll_success_ns;
 	u64 halt_poll_fail_ns;
-	u64 nested_run;
+	u64 nested_runs;
 	u64 directed_yield_attempted;
 	u64 directed_yield_successful;
 };
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 5e8d8443154e..34fc74b0d58a 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -596,8 +596,6 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
 	struct kvm_host_map map;
 	u64 vmcb12_gpa;
 
-	++vcpu->stat.nested_run;
-
 	if (is_smm(vcpu)) {
 		kvm_queue_exception(vcpu, UD_VECTOR);
 		return 1;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 4dd9b7856e5b..57c351640355 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3872,6 +3872,13 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
 	svm->next_rip = 0;
 	if (is_guest_mode(vcpu)) {
 		nested_sync_control_from_vmcb02(svm);
+
+		/* Track VMRUNs that have made it past consistency checking */
+		if (svm->nested.nested_run_pending &&
+		    svm->vmcb->control.exit_code != SVM_EXIT_ERR &&
+		    svm->vmcb->control.exit_code != SVM_EXIT_NPF)
+			++vcpu->stat.nested_runs;
+
 		svm->nested.nested_run_pending = 0;
 	}
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 6058a65a6ede..94f70c0af4a4 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -3454,8 +3454,6 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
 	u32 interrupt_shadow = vmx_get_interrupt_shadow(vcpu);
 	enum nested_evmptrld_status evmptrld_status;
 
-	++vcpu->stat.nested_run;
-
 	if (!nested_vmx_check_permission(vcpu))
 		return 1;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index af2be5930ba4..fa8df7ab2756 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6839,8 +6839,17 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
 	kvm_load_host_xsave_state(vcpu);
 
-	if (is_guest_mode(vcpu))
+	if (is_guest_mode(vcpu)) {
+		/*
+		 * Track VMLAUNCH/VMRESUME that have made it past guest state
+		 * checking.
+		 */
+		if (vmx->nested.nested_run_pending &&
+		    !vmx->exit_reason.failed_vmentry)
+			++vcpu->stat.nested_runs;
+
 		vmx->nested.nested_run_pending = 0;
+	}
 
 	vmx->idt_vectoring_info = 0;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5bd550eaf683..6d1f51f6c344 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -243,7 +243,7 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
 	VCPU_STAT("l1d_flush", l1d_flush),
 	VCPU_STAT("halt_poll_success_ns", halt_poll_success_ns),
 	VCPU_STAT("halt_poll_fail_ns", halt_poll_fail_ns),
-	VCPU_STAT("nested_run", nested_run),
+	VCPU_STAT("nested_runs", nested_runs),
 	VCPU_STAT("directed_yield_attempted", directed_yield_attempted),
 	VCPU_STAT("directed_yield_successful", directed_yield_successful),
 	VM_STAT("mmu_shadow_zapped", mmu_shadow_zapped),
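Because the debugfs_entries[] hunk above keeps exporting the renamed counter as a VCPU_STAT, it also appears as an aggregate file under the KVM debugfs directory. A minimal user-space reader might look like the sketch below; it assumes debugfs is mounted at /sys/kernel/debug, that reading it requires root, and that this patch is applied so the file is named nested_runs rather than nested_run:

/*
 * Minimal sketch of reading the aggregate KVM debugfs counter.
 * Assumptions: debugfs mounted at /sys/kernel/debug, run as root,
 * and this patch applied (file name "nested_runs").
 */
#include <stdio.h>

int main(void)
{
	const char *path = "/sys/kernel/debug/kvm/nested_runs";
	unsigned long long nested_runs;
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);	/* missing debugfs mount or insufficient privileges */
		return 1;
	}
	if (fscanf(f, "%llu", &nested_runs) == 1)
		printf("nested_runs = %llu\n", nested_runs);
	fclose(f);
	return 0;
}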
Currently, the 'nested_run' statistic counts all guest-entry attempts,
including those that fail during vmentry checks on Intel and during
consistency checks on AMD. Convert this statistic to count only those
guest-entries that make it past these state checks and make it to guest
code. This will tell us the number of guest-entries that actually executed
or tried to execute guest code.

Also, rename this statistic to 'nested_runs' since it is a count.

Signed-off-by: Krish Sadhukhan <Krish.Sadhukhan@oracle.com>
Suggested-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/include/asm/kvm_host.h |  2 +-
 arch/x86/kvm/svm/nested.c       |  2 --
 arch/x86/kvm/svm/svm.c          |  7 +++++++
 arch/x86/kvm/vmx/nested.c       |  2 --
 arch/x86/kvm/vmx/vmx.c          | 11 ++++++++++-
 arch/x86/kvm/x86.c              |  2 +-
 6 files changed, 19 insertions(+), 7 deletions(-)