[2/2] KVM: Add VT-x machine check support

Message ID 20090604111205.C47C31D0282@basil.firstfloor.org (mailing list archive)
State New, archived

Commit Message

Andi Kleen June 4, 2009, 11:12 a.m. UTC
[Avi, could you please still consider this patch for your 2.6.31 patch queue?
It's fairly simple, but important for handling memory errors in guests.]

VT-x needs an explicit MC vector intercept to handle machine checks in the
hypervisor.

It also has a special exit reason for machine checks that happen
during VM entry.

Handle both cases and forward them to the Linux machine check
handler. Always make it look like user space was interrupted, because
the machine check handler treats kernel and user space differently.

Thanks to Huang Ying and Jiang Yunhong for help and testing.

Cc: ying.huang@intel.com
Signed-off-by: Andi Kleen <ak@linux.intel.com>

---
 arch/x86/include/asm/vmx.h |    1 +
 arch/x86/kvm/vmx.c         |   26 ++++++++++++++++++++++++--
 2 files changed, 25 insertions(+), 2 deletions(-)


Comments

Avi Kivity June 4, 2009, 11:48 a.m. UTC | #1
Andi Kleen wrote:
> [Avi could you please still consider this patch for your 2.6.31 patchqueue?
> It's fairly simple, but important to handle memory errors in guests]
>   

Oh yes, and it'll be needed for -stable.  IIUC, right now a machine 
check is trapped by the guest, so the guest is killed instead of the host?

> +/*
> + * Trigger machine check on the host. We assume all the MSRs are already set up
> + * by the CPU and that we still run on the same CPU as the MCE occurred on.
> + * We pass a fake environment to the machine check handler because we want
> + * the guest to be always treated like user space, no matter what context
> + * it used internally.
> + */
>   

This assumption is incorrect.  This code is executed after preemption 
has been enabled, and we may have even slept before reaching it.

NMI suffers from the same issue, see vmx_complete_interrupts().  You 
could handle it the same way.
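
For reference, the NMI path being pointed to looks roughly like this (a
simplified sketch, not verbatim code of that kernel era): it runs on the
VM-exit path, before interrupts are re-enabled, so it is guaranteed to
execute on the CPU that took the NMI.

	/* Sketch of the existing NMI forwarding on the VM-exit path */
	intr_info = vmcs_read32(VM_EXIT_INTR_INFO);

	/* NMIs must be handled before interrupts are enabled */
	if ((intr_info & INTR_INFO_INTR_TYPE_MASK) == INTR_TYPE_NMI_INTR &&
	    (intr_info & INTR_INFO_VALID_MASK))
		asm("int $2");	/* re-enter the host NMI handler on this CPU */

A machine check could be forwarded from the same spot, instead of from the
regular exit handler, which runs much later.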

> @@ -3150,6 +3171,7 @@
>  	[EXIT_REASON_WBINVD]                  = handle_wbinvd,
>  	[EXIT_REASON_TASK_SWITCH]             = handle_task_switch,
>  	[EXIT_REASON_EPT_VIOLATION]	      = handle_ept_violation,
> +	[EXIT_REASON_MACHINE_CHECK]	      = handle_machine_check,
>  };
>  
>  static const int kvm_vmx_max_exit_handlers =
>   

We get both an explicit EXIT_REASON and an exception?
Avi Kivity June 4, 2009, 12:49 p.m. UTC | #2
Andi Kleen wrote:
>> This assumption is incorrect.  This code is executed after preemption 
>> has been enabled, and we may have even slept before reaching it.
>>     
>
> The only thing that counts here is the context before the machine
> check event. If there was a vmexit we know it was in guest context.
>
> The only requirement we have is that we're running still on the same
> CPU. I assume that's true, otherwise the vmcb accesses wouldn't work?
>   

It's not true, we're in preemptible context and may have even slept.

vmcs accesses work because we have a preempt notifier called when we are 
scheduled in, which will execute vmclear/vmptrld as necessary.  Look at 
kvm_preempt_ops in virt/kvm/kvm_main.c.
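
To make the mechanism concrete, here is a simplified sketch (names and
details approximate, condensed from virt/kvm/kvm_main.c and
arch/x86/kvm/vmx.c of that era): when the vcpu thread is scheduled back
in, possibly on another CPU, the preempt notifier reloads the VMCS there,
so vmcs_read()/vmcs_write() keep working even though the thread migrated.

/* Sketch: preempt notifier hook registered for each vcpu thread */
static void kvm_sched_in(struct preempt_notifier *pn, int cpu)
{
	struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);

	kvm_arch_vcpu_load(vcpu, cpu);	/* ends up in vmx_vcpu_load() */
}

/* Sketch: VMCS migration done by vmx_vcpu_load() */
static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
{
	struct vcpu_vmx *vmx = to_vmx(vcpu);

	if (vcpu->cpu != cpu)
		vcpu_clear(vmx);	/* VMCLEAR on the old CPU, via IPI */
	if (per_cpu(current_vmcs, cpu) != vmx->vmcs) {
		per_cpu(current_vmcs, cpu) = vmx->vmcs;
		/* VMPTRLD the VMCS on this CPU (inline asm in the real code) */
	}
	/* ... */
}

That is exactly why "same CPU as the #MC" cannot be assumed by the time an
exit handler runs.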

>> We get both an explicit EXIT_REASON and an exception?
>>     
>
> These are different cases. The exception is #MC in guest context,
> the EXIT_REASON is when a #MC happens while the CPU is executing
> the VM entry microcode.
>   

I see, thanks.
Andi Kleen June 4, 2009, 12:51 p.m. UTC | #3
On Thu, Jun 04, 2009 at 02:48:17PM +0300, Avi Kivity wrote:
> Andi Kleen wrote:
> >[Avi could you please still consider this patch for your 2.6.31 patchqueue?
> >It's fairly simple, but important to handle memory errors in guests]
> >  
> 
> Oh yes, and it'll be needed for -stable.  IIUC, right now a machine 
> check is trapped by the guest, so the guest is killed instead of the host?

Yes, the guest will receive int 18.

But it will not kill itself, because the guest cannot access the machine check
MSRs, so it will never see any machine check. The event is effectively ignored,
which is pretty bad.

> 
> >+/*
> >+ * Trigger machine check on the host. We assume all the MSRs are already 
> >set up
> >+ * by the CPU and that we still run on the same CPU as the MCE occurred 
> >on.
> >+ * We pass a fake environment to the machine check handler because we want
> >+ * the guest to be always treated like user space, no matter what context
> >+ * it used internally.
> >+ */
> >  
> 
> This assumption is incorrect.  This code is executed after preemption 
> has been enabled, and we may have even slept before reaching it.

The only thing that counts here is the context before the machine
check event. If there was a vmexit we know it was in guest context.

The only requirement we have is that we're still running on the same
CPU. I assume that's true, otherwise the vmcs accesses wouldn't work?

> > 	[EXIT_REASON_EPT_VIOLATION]	      = handle_ept_violation,
> >+	[EXIT_REASON_MACHINE_CHECK]	      = handle_machine_check,
> > };
> > 
> > static const int kvm_vmx_max_exit_handlers =
> >  
> 
> We get both an explicit EXIT_REASON and an exception?

These are different cases. The exception is #MC in guest context,
the EXIT_REASON is when a #MC happens while the CPU is executing
the VM entry microcode.

-Andi
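
For reference, those two cases map to two separate dispatch points in the
patch below; both end up in the same handler:

	/* In handle_exception(): a #MC that hit while the guest was running
	 * arrives as an exception exit and is dispatched before anything else */
	ex_no = intr_info & INTR_INFO_VECTOR_MASK;
	if (ex_no == MCE_VECTOR)
		return handle_machine_check(vcpu, kvm_run);

	/* In kvm_vmx_exit_handlers[]: a #MC during VM entry arrives as its own
	 * exit reason (41) and is routed to the same handler */
	[EXIT_REASON_MACHINE_CHECK]	      = handle_machine_check,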
Andi Kleen June 4, 2009, 1:01 p.m. UTC | #4
On Thu, Jun 04, 2009 at 03:49:03PM +0300, Avi Kivity wrote:
> Andi Kleen wrote:
> >>This assumption is incorrect.  This code is executed after preemption 
> >>has been enabled, and we may have even slept before reaching it.
> >>    
> >
> >The only thing that counts here is the context before the machine
> >check event. If there was a vmexit we know it was in guest context.
> >
> >The only requirement we have is that we're running still on the same
> >CPU. I assume that's true, otherwise the vmcb accesses wouldn't work?
> >  
> 
> It's not true, we're in preemptible context and may have even slept.
> 
> vmcs access work because we have a preempt notifier called when we are 
> scheduled in, and will execute vmclear/vmptrld as necessary.  Look at 
> kvm_preempt_ops in virt/kvm_main.c.

I see. So we need to move that check earlier. Do you have a preference
where it should be?

-Andi
Avi Kivity June 4, 2009, 1:10 p.m. UTC | #5
Andi Kleen wrote:
>> vmcs access work because we have a preempt notifier called when we are 
>> scheduled in, and will execute vmclear/vmptrld as necessary.  Look at 
>> kvm_preempt_ops in virt/kvm_main.c.
>>     
>
> I see. So we need to move that check earlier. Do you have a preference
> where it should be?
>   

There's no good place as it breaks the nice exit handler table.  You 
could put it in vmx_complete_interrupts() next to NMI handling.
Andi Kleen June 4, 2009, 1:20 p.m. UTC | #6
On Thu, Jun 04, 2009 at 04:10:14PM +0300, Avi Kivity wrote:
> Andi Kleen wrote:
> >>vmcs access work because we have a preempt notifier called when we are 
> >>scheduled in, and will execute vmclear/vmptrld as necessary.  Look at 
> >>kvm_preempt_ops in virt/kvm_main.c.
> >>    
> >
> >I see. So we need to move that check earlier. Do you have a preference
> >where it should be?
> >  
> 
> There's no good place as it breaks the nice exit handler table.  You 
> could put it in vmx_complete_interrupts() next to NMI handling.

I think I came up with an easy, cheesy, but not too bad solution now that should 
work. It simply remembers the CPU in the vcpu structure and schedules back to 
it. That's fine for this purpose. 

Currently testing the patch.

-Andi
Avi Kivity June 4, 2009, 1:49 p.m. UTC | #7
Andi Kleen wrote:
>> There's no good place as it breaks the nice exit handler table.  You 
>> could put it in vmx_complete_interrupts() next to NMI handling.
>>     
>
> I think I came up with a easy cheesy but not too bad solution now that should 
> work. It simply remembers the CPU in the vcpu structure and schedules back to 
> it. That's fine for this purpose. 
>   

We might not be able to schedule back in a timely manner.  Why not hack 
vmx_complete_interrupts()?  You're still in the critical section, so 
you're guaranteed no delays or surprises.
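
Concretely, that suggestion amounts to something like the sketch below
(an illustration only, not the follow-up patch; the constants are the ones
used in this patch, and kvm_machine_check() here stands for the fake-ring-3
do_machine_check() call shown in the patch below):

	/* Sketch: on the VM-exit path, next to the NMI forwarding, before
	 * interrupts are re-enabled and before the vcpu can be preempted
	 * or migrated off the CPU that took the #MC */
	exit_intr_info = vmcs_read32(VM_EXIT_INTR_INFO);
	exit_reason = vmcs_read32(VM_EXIT_REASON);

	if (exit_reason == EXIT_REASON_MACHINE_CHECK ||
	    ((exit_intr_info & INTR_INFO_VALID_MASK) &&
	     (exit_intr_info & INTR_INFO_VECTOR_MASK) == MC_VECTOR))
		kvm_machine_check();

The table entry and the handle_exception() hook from this patch would then
only need to report the exit as already handled.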
Andi Kleen June 4, 2009, 2:07 p.m. UTC | #8
On Thu, Jun 04, 2009 at 04:49:50PM +0300, Avi Kivity wrote:
> Andi Kleen wrote:
> >>There's no good place as it breaks the nice exit handler table.  You 
> >>could put it in vmx_complete_interrupts() next to NMI handling.
> >>    
> >
> >I think I came up with a easy cheesy but not too bad solution now that 
> >should work. It simply remembers the CPU in the vcpu structure and 
> >schedules back to it. That's fine for this purpose. 
> >  
> 
> We might not be able to schedule back in a timely manner.  Why not hack 
> vmx_complete_interrupts()?  You're still in the critical section, so 
> you're guaranteed no delays or surprises.

Yes, I'll have to do that. My original scheme was too risky because
the machine checks have synchronization mechanisms now, and
preemption has no time limit.

I'll hack on it later today; hopefully I'll have a patch tomorrow.

-Andi

Patch

Index: linux/arch/x86/include/asm/vmx.h
===================================================================
--- linux.orig/arch/x86/include/asm/vmx.h	2009-05-28 10:47:53.000000000 +0200
+++ linux/arch/x86/include/asm/vmx.h	2009-06-04 11:58:49.000000000 +0200
@@ -247,6 +247,7 @@ 
 #define EXIT_REASON_MSR_READ            31
 #define EXIT_REASON_MSR_WRITE           32
 #define EXIT_REASON_MWAIT_INSTRUCTION   36
+#define EXIT_REASON_MACHINE_CHECK	41
 #define EXIT_REASON_TPR_BELOW_THRESHOLD 43
 #define EXIT_REASON_APIC_ACCESS         44
 #define EXIT_REASON_EPT_VIOLATION       48
Index: linux/arch/x86/kvm/vmx.c
===================================================================
--- linux.orig/arch/x86/kvm/vmx.c	2009-05-28 10:47:53.000000000 +0200
+++ linux/arch/x86/kvm/vmx.c	2009-06-04 12:05:44.000000000 +0200
@@ -32,6 +32,7 @@ 
 #include <asm/desc.h>
 #include <asm/vmx.h>
 #include <asm/virtext.h>
+#include <asm/mce.h>
 
 #define __ex(x) __kvm_handle_fault_on_reboot(x)
 
@@ -478,7 +479,7 @@ 
 {
 	u32 eb;
 
-	eb = (1u << PF_VECTOR) | (1u << UD_VECTOR);
+	eb = (1u << PF_VECTOR) | (1u << UD_VECTOR) | (1u << MC_VECTOR);
 	if (!vcpu->fpu_active)
 		eb |= 1u << NM_VECTOR;
 	if (vcpu->guest_debug & KVM_GUESTDBG_ENABLE) {
@@ -2585,6 +2586,23 @@ 
 	return 0;
 }
 
+/*
+ * Trigger machine check on the host. We assume all the MSRs are already set up
+ * by the CPU and that we still run on the same CPU as the MCE occurred on.
+ * We pass a fake environment to the machine check handler because we want
+ * the guest to be always treated like user space, no matter what context
+ * it used internally.
+ */
+static int handle_machine_check(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
+{
+	struct pt_regs regs = {
+		.cs = 3, /* Fake ring 3 no matter what the guest ran on */
+		.flags = X86_EFLAGS_IF,
+	};
+	do_machine_check(&regs, 0);
+	return 1;
+}
+
 static int handle_exception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -2596,6 +2614,10 @@ 
 	vect_info = vmx->idt_vectoring_info;
 	intr_info = vmcs_read32(VM_EXIT_INTR_INFO);
 
+	ex_no = intr_info & INTR_INFO_VECTOR_MASK;
+	if (ex_no == MCE_VECTOR)
+		return handle_machine_check(vcpu, kvm_run);
+
 	if ((vect_info & VECTORING_INFO_VALID_MASK) &&
 						!is_page_fault(intr_info))
 		printk(KERN_ERR "%s: unexpected, vectoring info 0x%x "
@@ -2648,7 +2670,6 @@ 
 		return 1;
 	}
 
-	ex_no = intr_info & INTR_INFO_VECTOR_MASK;
 	switch (ex_no) {
 	case DB_VECTOR:
 		dr6 = vmcs_readl(EXIT_QUALIFICATION);
@@ -3150,6 +3171,7 @@ 
 	[EXIT_REASON_WBINVD]                  = handle_wbinvd,
 	[EXIT_REASON_TASK_SWITCH]             = handle_task_switch,
 	[EXIT_REASON_EPT_VIOLATION]	      = handle_ept_violation,
+	[EXIT_REASON_MACHINE_CHECK]	      = handle_machine_check,
 };
 
 static const int kvm_vmx_max_exit_handlers =