
x86/vpmu_intel: Fix hypervisor crash by catching wrmsr fault

Message ID 20170420174942.12913-2-mohit.gambhir@oracle.com (mailing list archive)
State New, archived

Commit Message

Mohit Gambhir April 20, 2017, 5:49 p.m. UTC
This patch changes the wrmsrl() calls that write to the MSR_P6_EVNTSEL
registers in the VPMU to wrmsr_safe(). There are known (and possibly some
unknown) cases where setting certain bits in an MSR_P6_EVNTSEL register can
cause a General Protection fault on some machines. Unless we catch this
fault when it happens, it results in a hypervisor crash.

For instance, setting the Pin Control (PC) bit (bit 19) in MSR_P6_EVNTSEL
results in a General Protection Fault on Broadwell machines and thus causes
the hypervisor to crash.

This patch fixes the above-mentioned crash (and other possible hypervisor
crashes that may occur while writing an MSR_P6_EVNTSEL register) by catching
the fault and returning it to the guest OS.

Signed-off-by: Mohit Gambhir <mohit.gambhir@oracle.com>
---
 xen/arch/x86/cpu/vpmu_intel.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

Comments

Jan Beulich April 21, 2017, 7:14 a.m. UTC | #1
>>> On 20.04.17 at 19:49, <mohit.gambhir@oracle.com> wrote:
> This patch changes wrmsrl() calls to write to MSR_P6_EVTSEL register in the
> VPMU to wrmsr_safe(). There are known (and possibly some unknown) cases where
> setting certain bits in MSR_P6_EVTSEL reg. can cause a General Protection
> fault on some machines. Unless we catch this fault when it happens, it will
> result in a hypervisor crash.
> 
> For instance, setting Pin Control (PC) bit (19) in MSR_P6_EVNTSEL results
> in a General Protection Fault on Broadwell machines and thus causes the
> hypervisor to crash.

I can't seem to find indication of this in the SDM. Is this removal of a
valid bit from an architectural MSR documented anywhere?

> @@ -356,7 +356,9 @@ static inline void __core2_vpmu_load(struct vcpu *v)
>      for ( i = 0; i < arch_pmc_cnt; i++ )
>      {
>          wrmsrl(pmc_start + i, xen_pmu_cntr_pair[i].counter);
> -        wrmsrl(MSR_P6_EVNTSEL(i), xen_pmu_cntr_pair[i].control);
> +        if ( wrmsr_safe(MSR_P6_EVNTSEL(i), xen_pmu_cntr_pair[i].control) ==
> +                -EFAULT)
> +            return -EFAULT;

If the function already returns an error code, please use it to avoid
a latent bug when in the future it may also return other than -EFAULT.
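
A minimal sketch of the suggested pattern (illustrative only; the variable
names are assumed from the surrounding code, and the final form is up to the
author):

    int rc = wrmsr_safe(MSR_P6_EVNTSEL(i), xen_pmu_cntr_pair[i].control);

    if ( rc )
        return rc;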

> @@ -369,6 +371,7 @@ static inline void __core2_vpmu_load(struct vcpu *v)
>          core2_vpmu_cxt->global_ovf_ctrl = 0;
>          wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
>      }
> +    return 0;

Blank line above the main return statement of a function please.

> @@ -461,9 +464,8 @@ static int core2_vpmu_load(struct vcpu *v, bool_t 
> from_guest)
>  
>      vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
>  
> -    __core2_vpmu_load(v);
> +    return __core2_vpmu_load(v);
>  
> -    return 0;
>  }

Please also remove the now stray blank line.

> @@ -538,7 +540,8 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
>      /* Do the lazy load staff. */
>      if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
>      {
> -        __core2_vpmu_load(current);
> +        if ( __core2_vpmu_load(current) )
> +            return 0;

Nothing you need to change, but the change here points out that
the function really needs switching to returning bool.

> @@ -719,8 +722,11 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content,
>          }
>      }
>  
> -    if ( type != MSR_TYPE_GLOBAL )
> -        wrmsrl(msr, msr_content);
> +    if ( type != MSR_TYPE_GLOBAL)
> +    {
> +        if ( wrmsr_safe(msr, msr_content) == -EFAULT )
> +            return -EFAULT;

See the earlier similar comment.

Further a more general question: Why do you do this only for
MSR_P6_EVNTSEL*, but not consistently for all MSRs which are
being written with (partially) guest controlled values? I'd namely
expect all wrmsrl() to be switched over, unless it is clear that
we're dealing with an erratum here and we therefore can
assume that normal guarantees for architectural MSRs apply to
everything else.

And finally, in the description you talk about forwarding the
fault - I can see this being the case for core2_vpmu_do_wrmsr(),
but how does this match up with __core2_vpmu_load() and its
callers? Namely you may report an error there after already
having updated some MSRs. Leaving the overall set of MSRs in
an inconsistent state is not very likely to be a good idea.

Jan
Mohit Gambhir April 24, 2017, 3:44 p.m. UTC | #2
On 04/21/2017 03:14 AM, Jan Beulich wrote:
>>>> On 20.04.17 at 19:49,<mohit.gambhir@oracle.com>  wrote:
>> This patch changes wrmsrl() calls to write to MSR_P6_EVTSEL register in the
>> VPMU to wrmsr_safe(). There are known (and possibly some unknown) cases where
>> setting certain bits in MSR_P6_EVTSEL reg. can cause a General Protection
>> fault on some machines. Unless we catch this fault when it happens, it will
>> result in a hypervisor crash.
>>
>> For instance, setting Pin Control (PC) bit (19) in MSR_P6_EVNTSEL results
>> in a General Protection Fault on Broadwell machines and thus causes the
>> hypervisor to crash.
> I can't seem to find indication of this in the SDM. Is this removal of a
> valid bit from an architectural MSR documented anywhere?
I see it as an observed hardware bug, so I don't expect it to be found in
the SDM. I reached out to the folks at Intel (CC'd) but they are unaware of
it. Ideally it should work the way the SDM says for all events and on all
machines, but it doesn't - not for all the events and certainly not on all
machines.

Also, from the description in the SDM, the PC flag bit seems very
disruptive to me. The SDM says that if the bit is set then the processor
toggles the PMi pin (generating a performance monitoring interrupt?) every
time the event occurs. So if we program the counter to count "unhalted core
cycles" and set the PC flag bit, it will generate an interrupt every cycle!?
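
For reference, a rough illustration of the kind of write in question,
assuming the architectural IA32_PERFEVTSELx layout (event 0x3C / umask 0x00
for unhalted core cycles; USR, OS and EN enable bits; PC in bit 19). The
value below is made up for illustration and is not taken from the patch:

    /* Illustrative only: unhalted core cycles with USR, OS, EN and PC set. */
    uint64_t evtsel = 0x3C |            /* event select: unhalted core cycles */
                      (1ULL << 16) |    /* USR */
                      (1ULL << 17) |    /* OS  */
                      (1ULL << 19) |    /* PC (pin control) - the problematic bit */
                      (1ULL << 22);     /* EN  */
    wrmsrl(MSR_P6_EVNTSEL(0), evtsel);  /* observed to #GP on Broadwell */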

>> @@ -356,7 +356,9 @@ static inline void __core2_vpmu_load(struct vcpu *v)
>>       for ( i = 0; i < arch_pmc_cnt; i++ )
>>       {
>>           wrmsrl(pmc_start + i, xen_pmu_cntr_pair[i].counter);
>> -        wrmsrl(MSR_P6_EVNTSEL(i), xen_pmu_cntr_pair[i].control);
>> +        if ( wrmsr_safe(MSR_P6_EVNTSEL(i), xen_pmu_cntr_pair[i].control) ==
>> +                -EFAULT)
>> +            return -EFAULT;
> If the function already returns an error code, please use it to avoid
> a latent bug when in the future it may also return other than -EFAULT.
Fixed in next version.
>> @@ -369,6 +371,7 @@ static inline void __core2_vpmu_load(struct vcpu *v)
>>           core2_vpmu_cxt->global_ovf_ctrl = 0;
>>           wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
>>       }
>> +    return 0;
> Blank line above the main return statement of a function please.
Fixed in next version.
>> @@ -461,9 +464,8 @@ static int core2_vpmu_load(struct vcpu *v, bool_t
>> from_guest)
>>   
>>       vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
>>   
>> -    __core2_vpmu_load(v);
>> +    return __core2_vpmu_load(v);
>>   
>> -    return 0;
>>   }
> Please also remove the now stray blank line.
>
Fixed in next version.
>> @@ -538,7 +540,8 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
>>       /* Do the lazy load staff. */
>>       if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
>>       {
>> -        __core2_vpmu_load(current);
>> +        if ( __core2_vpmu_load(current) )
>> +            return 0;
> Nothing you need to change, but the change here point out that
> the function really needs switching to returning bool.
Agreed.
>> @@ -719,8 +722,11 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content,
>>           }
>>       }
>>   
>> -    if ( type != MSR_TYPE_GLOBAL )
>> -        wrmsrl(msr, msr_content);
>> +    if ( type != MSR_TYPE_GLOBAL)
>> +    {
>> +        if ( wrmsr_safe(msr, msr_content) == -EFAULT )
>> +            return -EFAULT;
> See the earlier similar comment.
Fixed in next version.
> Further a more general question: Why do you do this only for
> MSR_P6_EVNTSEL*, but not consistently for all MSRs which are
> being written with (partially) guest controlled values? I'd namely
> expect all wrmsrl() to be switched over, unless it is clear that
> we're dealing with an erratum here and we therefore can
> assume that normal guarantees for architectural MSRs apply to
> everything else.
I think it is an erratum for one specific MSR, namely MSR_P6_EVNTSEL.
> And finally, in the description you talk about forwarding the
> fault - I can see this being the case for core2_vpmu_do_wrmsr(),
> but how does this match up with __core2_vpmu_load() and its
> callers? Namely you may report an error there after already
> having updated some MSRs. Leaving the overall set of MSRs in
> an inconsistent state is not very likely to be a good idea.
Yes, that's something I did not think about and needs to be fixed. There
are two issues here -

1. What should the MSR values be set to in case of an error? We cannot
revert them, because if this happens as part of a context switch we do not
want to leak values from the previous guest to the new guest. So one
possible solution is to wipe all PMU MSRs (set them to 0) before returning
the error (see the rough sketch after point 2 below).

2. __core2_vpmu_load() is called in two scenarios - one from the
XENPMU_FLUSH hypercall in do_xenpmu_op(), and the other during a context
switch from vpmu_switch_to(). The hypercall seems to handle the error
correctly, but vpmu_switch_to() ignores it. So to propagate the error up to
the guest kernel, maybe we should inject the fault into the guest using
hvm_inject_hw_exception(TRAP_gp_fault, error) /
pv_inject_hw_exception(TRAP_gp_fault, error)?
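
A rough sketch of the wipe idea from point 1 (illustrative only, reusing
the names from __core2_vpmu_load(); not a tested implementation):

    /* Illustrative sketch: clear the programmable counters and their event
     * selects, plus the fixed-counter and global control MSRs, so no stale
     * values from the previous guest remain after a failed load. */
    for ( i = 0; i < arch_pmc_cnt; i++ )
    {
        wrmsrl(pmc_start + i, 0);
        wrmsr_safe(MSR_P6_EVNTSEL(i), 0);
    }
    wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, 0);
    wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);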

> Jan
Jan Beulich April 24, 2017, 4:04 p.m. UTC | #3
>>> On 24.04.17 at 17:44, <mohit.gambhir@oracle.com> wrote:
> On 04/21/2017 03:14 AM, Jan Beulich wrote:
>>>>> On 20.04.17 at 19:49,<mohit.gambhir@oracle.com>  wrote:
>>> This patch changes wrmsrl() calls to write to MSR_P6_EVTSEL register in the
>>> VPMU to wrmsr_safe(). There are known (and possibly some unknown) cases where
>>> setting certain bits in MSR_P6_EVTSEL reg. can cause a General Protection
>>> fault on some machines. Unless we catch this fault when it happens, it will
>>> result in a hypervisor crash.
>>>
>>> For instance, setting Pin Control (PC) bit (19) in MSR_P6_EVNTSEL results
>>> in a General Protection Fault on Broadwell machines and thus causes the
>>> hypervisor to crash.
>> I can't seem to find indication of this in the SDM. Is this removal of a
>> valid bit from an architectural MSR documented anywhere?
> It see it as an observed hardware bug so I don't expect it to be found 
> in the SDM.  I reached out to
> the folks at Intel (CC) but they are unaware of it. Ideally it should 
> work the way it says in the SDM
> for all events and on all machines but it doesn't - not for the all the 
> events and certainly not on all
> machines.

Please have the description make clear this is an (assumed)
erratum, until Intel either clarifies or adds an entry to the spec
update for the affected model(s).

>> Further a more general question: Why do you do this only for
>> MSR_P6_EVNTSEL*, but not consistently for all MSRs which are
>> being written with (partially) guest controlled values? I'd namely
>> expect all wrmsrl() to be switched over, unless it is clear that
>> we're dealing with an erratum here and we therefore can
>> assume that normal guarantees for architectural MSRs apply to
>> everything else.
> I think it is an erratum for one specific MSR which is MSR_P6_EVTSEL
>> And finally, in the description you talk about forwarding the
>> fault - I can see this being the case for core2_vpmu_do_wrmsr(),
>> but how does this match up with __core2_vpmu_load() and its
>> callers? Namely you may report an error there after already
>> having updated some MSRs. Leaving the overall set of MSRs in
>> an inconsistent state is not very likely to be a good idea.
> Yes, that's something I did not think about and needs to be fixed. There 
> are two issues here -
> 
> 1. What should we set MSR values in case of an error? We can not revert 
> them because if it is part of a
> context switch  then we do not want to leak the values from the previous 
> guest to the new guest.
> So one possible solution is to  wipe all PMU MSRs (set them to 0) before 
> returning the error.
> 
> 2. __core2_vpmu_load is called in two scenarios - One from a hyper call 
> XENPMU_FLUSH from do xenpmu_op()
> and the other during a context switch from vpmu_switch_to(). The 
> hypercall seems to be handling the error
> correctly but vpmu_switch_to() ignores the error. So to propagate the 
> error up to the guest kernel maybe we should
> inject the fault into the guest using 
> hvm_inject_hw_exception(TRAP_gp_fault, error)/ 
> pv_inject_hw_exception(TRAP_gp_fault, error)?

You can't arbitrarily inject an exception out of the context switch
code. I'm not sure you can sensibly continue the guest in this
case, unless - as you say - there is a way to place the PMU into a
consistent state (and you provide _some_ kind of indication to the
guest about the vPMU state no longer matching its expectations).

Jan

Patch

diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
index 3f0322c..13808b5 100644
--- a/xen/arch/x86/cpu/vpmu_intel.c
+++ b/xen/arch/x86/cpu/vpmu_intel.c
@@ -338,7 +338,7 @@  static int core2_vpmu_save(struct vcpu *v, bool_t to_guest)
     return 1;
 }
 
-static inline void __core2_vpmu_load(struct vcpu *v)
+static inline int __core2_vpmu_load(struct vcpu *v)
 {
     unsigned int i, pmc_start;
     struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
@@ -356,7 +356,9 @@  static inline void __core2_vpmu_load(struct vcpu *v)
     for ( i = 0; i < arch_pmc_cnt; i++ )
     {
         wrmsrl(pmc_start + i, xen_pmu_cntr_pair[i].counter);
-        wrmsrl(MSR_P6_EVNTSEL(i), xen_pmu_cntr_pair[i].control);
+        if ( wrmsr_safe(MSR_P6_EVNTSEL(i), xen_pmu_cntr_pair[i].control) ==
+                -EFAULT)
+            return -EFAULT;
     }
 
     wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
@@ -369,6 +371,7 @@  static inline void __core2_vpmu_load(struct vcpu *v)
         core2_vpmu_cxt->global_ovf_ctrl = 0;
         wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
     }
+    return 0;
 }
 
 static int core2_vpmu_verify(struct vcpu *v)
@@ -461,9 +464,8 @@  static int core2_vpmu_load(struct vcpu *v, bool_t from_guest)
 
     vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
 
-    __core2_vpmu_load(v);
+    return __core2_vpmu_load(v);
 
-    return 0;
 }
 
 static int core2_vpmu_alloc_resource(struct vcpu *v)
@@ -538,7 +540,8 @@  static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
     /* Do the lazy load staff. */
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
     {
-        __core2_vpmu_load(current);
+        if ( __core2_vpmu_load(current) )
+            return 0;
         vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
         if ( is_hvm_vcpu(current) &&
              cpu_has_vmx_msr_bitmap )
@@ -719,8 +722,11 @@  static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content,
         }
     }
 
-    if ( type != MSR_TYPE_GLOBAL )
-        wrmsrl(msr, msr_content);
+    if ( type != MSR_TYPE_GLOBAL)
+    {
+        if ( wrmsr_safe(msr, msr_content) == -EFAULT )
+            return -EFAULT;
+    }
     else
     {
         if ( is_hvm_vcpu(v) )