Message ID: 20190325190917.144262-1-liran.alon@oracle.com (mailing list archive)
State: New, archived
Series: KVM: nVMX: Expose RDPMC-exiting only when guest supports PMU
Ping.

> On 25 Mar 2019, at 21:09, Liran Alon <liran.alon@oracle.com> wrote:
>
> Issue was discovered when running kvm-unit-tests on KVM running as L1 on
> top of Hyper-V.
>
> When the vmx_instruction_intercept unit-test attempts to run RDPMC to
> test RDPMC-exiting, it is intercepted by L1 KVM, whose EXIT_REASON_RDPMC
> handler raises #GP because the vCPU exposed by Hyper-V doesn't support a
> PMU, instead of being reflected to L2 with EXIT_REASON_RDPMC as the
> unit-test expects.
>
> The reason the vmx_instruction_intercept unit-test attempts to run RDPMC
> even though Hyper-V doesn't support a PMU is that L1 exposes
> RDPMC-exiting support to L2. It is reasonable to assume this control is
> supported only when the CPU supports a PMU to begin with.
>
> The above issue can easily be reproduced by modifying the
> vmx_instruction_intercept config in x86/unittests.cfg to run QEMU with
> "-cpu host,+vmx,-pmu" and running the unit-test.
>
> To fix the issue, change KVM to expose RDPMC-exiting only when the guest
> supports a PMU.
>
> Reported-by: Saar Amar <saaramar@microsoft.com>
> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
> Signed-off-by: Liran Alon <liran.alon@oracle.com>
> ---
> arch/x86/kvm/vmx/vmx.c | 25 +++++++++++++++++++++++++
> 1 file changed, 25 insertions(+)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index f6915f10e584..2634ee8c9dc8 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -6978,6 +6978,30 @@ static void nested_vmx_entry_exit_ctls_update(struct kvm_vcpu *vcpu)
> 	}
> }
>
> +static bool guest_cpuid_has_pmu(struct kvm_vcpu *vcpu)
> +{
> +	struct kvm_cpuid_entry2 *entry;
> +	union cpuid10_eax eax;
> +
> +	entry = kvm_find_cpuid_entry(vcpu, 0xa, 0);
> +	if (!entry)
> +		return false;
> +
> +	eax.full = entry->eax;
> +	return (eax.split.version_id > 0);
> +}
> +
> +static void nested_vmx_procbased_ctls_update(struct kvm_vcpu *vcpu)
> +{
> +	struct vcpu_vmx *vmx = to_vmx(vcpu);
> +	bool pmu_enabled = guest_cpuid_has_pmu(vcpu);
> +
> +	if (pmu_enabled)
> +		vmx->nested.msrs.procbased_ctls_high |= CPU_BASED_RDPMC_EXITING;
> +	else
> +		vmx->nested.msrs.procbased_ctls_high &= ~CPU_BASED_RDPMC_EXITING;
> +}
> +
> static void update_intel_pt_cfg(struct kvm_vcpu *vcpu)
> {
> 	struct vcpu_vmx *vmx = to_vmx(vcpu);
> @@ -7066,6 +7090,7 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
> 	if (nested_vmx_allowed(vcpu)) {
> 		nested_vmx_cr_fixed1_bits_update(vcpu);
> 		nested_vmx_entry_exit_ctls_update(vcpu);
> +		nested_vmx_procbased_ctls_update(vcpu);
> 	}
>
> 	if (boot_cpu_has(X86_FEATURE_INTEL_PT) &&
> --
> 2.20.1
>
On Mon, Apr 1, 2019 at 7:51 AM Liran Alon <liran.alon@oracle.com> wrote:
>
> Ping.
>
> > On 25 Mar 2019, at 21:09, Liran Alon <liran.alon@oracle.com> wrote:
> >
> > [...]
> >
> > +static void nested_vmx_procbased_ctls_update(struct kvm_vcpu *vcpu)
> > +{
> > +	struct vcpu_vmx *vmx = to_vmx(vcpu);
> > +	bool pmu_enabled = guest_cpuid_has_pmu(vcpu);
> > +
> > +	if (pmu_enabled)
> > +		vmx->nested.msrs.procbased_ctls_high |= CPU_BASED_RDPMC_EXITING;

Doesn't this require that L0 supports RDPMC exiting, so that the bit
can be passed from vmcs12 to vmcs02?

> > +	else
> > +		vmx->nested.msrs.procbased_ctls_high &= ~CPU_BASED_RDPMC_EXITING;
> > +}
> >
> > [...]
> On 1 Apr 2019, at 22:46, Jim Mattson <jmattson@google.com> wrote:
>
> On Mon, Apr 1, 2019 at 7:51 AM Liran Alon <liran.alon@oracle.com> wrote:
>>
>> [...]
>>
>>> +	if (pmu_enabled)
>>> +		vmx->nested.msrs.procbased_ctls_high |= CPU_BASED_RDPMC_EXITING;
>
> Doesn't this require that L0 supports RDPMC exiting, so that the bit
> can be passed from vmcs12 to vmcs02?

Looking at setup_vmcs_config(), CPU_BASED_RDPMC_EXITING is defined as
part of the minimum set of features that is required for KVM to run. So
I can safely assume here that vmcs02 has RDPMC-exiting.

Am I missing something?
-Liran
On Mon, Apr 1, 2019 at 12:56 PM Liran Alon <liran.alon@oracle.com> wrote:
>
> [...]
>
> Looking at setup_vmcs_config(), CPU_BASED_RDPMC_EXITING is defined as
> part of the minimum set of features that is required for KVM to run. So
> I can safely assume here that vmcs02 has RDPMC-exiting.
>
> Am I missing something?
> -Liran

No. That answers my question.

Reviewed-by: Jim Mattson <jmattson@google.com>
On 25/03/19 20:09, Liran Alon wrote:
> Issue was discovered when running kvm-unit-tests on KVM running as L1 on
> top of Hyper-V.
>
> [...]
>
> To fix the issue, change KVM to expose RDPMC-exiting only when the guest
> supports a PMU.
>
> Reported-by: Saar Amar <saaramar@microsoft.com>
> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
> Signed-off-by: Liran Alon <liran.alon@oracle.com>

Queued, thanks.

Paolo
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index f6915f10e584..2634ee8c9dc8 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6978,6 +6978,30 @@ static void nested_vmx_entry_exit_ctls_update(struct kvm_vcpu *vcpu)
 	}
 }
 
+static bool guest_cpuid_has_pmu(struct kvm_vcpu *vcpu)
+{
+	struct kvm_cpuid_entry2 *entry;
+	union cpuid10_eax eax;
+
+	entry = kvm_find_cpuid_entry(vcpu, 0xa, 0);
+	if (!entry)
+		return false;
+
+	eax.full = entry->eax;
+	return (eax.split.version_id > 0);
+}
+
+static void nested_vmx_procbased_ctls_update(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	bool pmu_enabled = guest_cpuid_has_pmu(vcpu);
+
+	if (pmu_enabled)
+		vmx->nested.msrs.procbased_ctls_high |= CPU_BASED_RDPMC_EXITING;
+	else
+		vmx->nested.msrs.procbased_ctls_high &= ~CPU_BASED_RDPMC_EXITING;
+}
+
 static void update_intel_pt_cfg(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -7066,6 +7090,7 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
 	if (nested_vmx_allowed(vcpu)) {
 		nested_vmx_cr_fixed1_bits_update(vcpu);
 		nested_vmx_entry_exit_ctls_update(vcpu);
+		nested_vmx_procbased_ctls_update(vcpu);
 	}
 
 	if (boot_cpu_has(X86_FEATURE_INTEL_PT) &&