Message ID | 20190424231724.2014-8-krish.sadhukhan@oracle.com (mailing list archive)
---|---
State | New, archived
Series | [1/8, KVM nVMX]: Enable "load IA32_PERF_GLOBAL_CTRL" VM-exit control for nested guests
On Wed, Apr 24, 2019 at 07:17:23PM -0400, Krish Sadhukhan wrote:
> ...based on whether the guest CPU supports PMU
>
> Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
> Suggested-by: Jim Mattson <jmattson@google.com>
> Reviewed-by: Karl Heubaum <karl.heubaum@oracle.com>
> ---
>  arch/x86/kvm/vmx/vmx.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 4d39f731bc33..fa9c786afcfa 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -6964,6 +6964,7 @@ static void nested_vmx_cr_fixed1_bits_update(struct kvm_vcpu *vcpu)
>  static void nested_vmx_entry_exit_ctls_update(struct kvm_vcpu *vcpu)
>  {
>  	struct vcpu_vmx *vmx = to_vmx(vcpu);
> +	bool pmu_enabled = guest_cpuid_has_pmu(vcpu);

A revert has been sent for the patch that added guest_cpuid_has_pmu().

Regardless, checking only the guest's CPUID 0xA is not sufficient, e.g.
at the bare minimum, exposing the controls can be done if and only if
cpu_has_load_perf_global_ctrl() is true.

In general, it's difficult for me to understand exactly what functionality
you intend to introduce.  Proper changelogs would be very helpful.

>
>  	if (kvm_mpx_supported()) {
>  		bool mpx_enabled = guest_cpuid_has(vcpu, X86_FEATURE_MPX);
> @@ -6976,6 +6977,17 @@ static void nested_vmx_entry_exit_ctls_update(struct kvm_vcpu *vcpu)
>  			vmx->nested.msrs.exit_ctls_high &= ~VM_EXIT_CLEAR_BNDCFGS;
>  		}
>  	}
> +	if (pmu_enabled) {
> +		vmx->nested.msrs.entry_ctls_high |=
> +			VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL;
> +		vmx->nested.msrs.exit_ctls_high |=
> +			VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL;
> +	} else {
> +		vmx->nested.msrs.entry_ctls_high &=
> +			~VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL;
> +		vmx->nested.msrs.exit_ctls_high &=
> +			~VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL;
> +	}
>  }
>
>  static void update_intel_pt_cfg(struct kvm_vcpu *vcpu)
> --
> 2.17.2
On Mon, May 13, 2019 at 12:12 PM Sean Christopherson
<sean.j.christopherson@intel.com> wrote:
>
> On Wed, Apr 24, 2019 at 07:17:23PM -0400, Krish Sadhukhan wrote:
> > ...based on whether the guest CPU supports PMU
> >
> > Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
> > Suggested-by: Jim Mattson <jmattson@google.com>
> > Reviewed-by: Karl Heubaum <karl.heubaum@oracle.com>
> > ---
> >  arch/x86/kvm/vmx/vmx.c | 12 ++++++++++++
> >  1 file changed, 12 insertions(+)
> >
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > index 4d39f731bc33..fa9c786afcfa 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -6964,6 +6964,7 @@ static void nested_vmx_cr_fixed1_bits_update(struct kvm_vcpu *vcpu)
> >  static void nested_vmx_entry_exit_ctls_update(struct kvm_vcpu *vcpu)
> >  {
> >  	struct vcpu_vmx *vmx = to_vmx(vcpu);
> > +	bool pmu_enabled = guest_cpuid_has_pmu(vcpu);
>
> A revert has been sent for the patch that added guest_cpuid_has_pmu().
>
> Regardless, checking only the guest's CPUID 0xA is not sufficient, e.g.
> at the bare minimum, exposing the controls can be done if and only if
> cpu_has_load_perf_global_ctrl() is true.

I don't think cpu_has_load_perf_global_ctrl() is required. Support for
the VM-entry and VM-exit controls, "load IA32_PERF_GLOBAL_CTRL," can be
completely emulated by kvm, since add_atomic_switch_msr() is capable of
falling back on the VM-entry and VM-exit MSR-load lists if
!cpu_has_load_perf_global_ctrl(). The only requirement should be
kvm_pmu_is_valid_msr(MSR_CORE_PERF_GLOBAL_CTRL).

> In general, it's difficult for me to understand exactly what functionality
> you intend to introduce. Proper changelogs would be very helpful.

I concur!
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 4d39f731bc33..fa9c786afcfa 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6964,6 +6964,7 @@ static void nested_vmx_cr_fixed1_bits_update(struct kvm_vcpu *vcpu)
 static void nested_vmx_entry_exit_ctls_update(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	bool pmu_enabled = guest_cpuid_has_pmu(vcpu);

 	if (kvm_mpx_supported()) {
 		bool mpx_enabled = guest_cpuid_has(vcpu, X86_FEATURE_MPX);
@@ -6976,6 +6977,17 @@ static void nested_vmx_entry_exit_ctls_update(struct kvm_vcpu *vcpu)
 			vmx->nested.msrs.exit_ctls_high &= ~VM_EXIT_CLEAR_BNDCFGS;
 		}
 	}
+	if (pmu_enabled) {
+		vmx->nested.msrs.entry_ctls_high |=
+			VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL;
+		vmx->nested.msrs.exit_ctls_high |=
+			VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL;
+	} else {
+		vmx->nested.msrs.entry_ctls_high &=
+			~VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL;
+		vmx->nested.msrs.exit_ctls_high &=
+			~VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL;
+	}
 }

 static void update_intel_pt_cfg(struct kvm_vcpu *vcpu)