
KVM: nVMX: Add support for activity state HLT

Message ID 52A6C6B4.1020307@web.de (mailing list archive)
State New, archived

Commit Message

Jan Kiszka Dec. 10, 2013, 7:45 a.m. UTC
On 2013-12-06 13:49, Jan Kiszka wrote:
> On 2013-12-05 10:52, Paolo Bonzini wrote:
>> On 04/12/2013 08:58, Jan Kiszka wrote:
>>> We can easily emulate the HLT activity state for L1: If it decides that
>>> L2 shall be halted on entry, just invoke the normal emulation of halt
>>> after switching to L2. We do not depend on specific host features to
>>> provide this, so we can expose the capability unconditionally.
>>>
>>> Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
>>> ---
>>>
>>> Jailhouse would like to use this. Experimental code works fine so far,
>>> both on patched KVM and real HW.
>>
>> Nice. :)
>>
>> Do you have a testcase for kvm-unit-tests?
> 
> Not yet. Maybe I will find a little time these days.

Tests are still ongoing, but it seems there are problems remaining with
halting in L2 in general, i.e. with unintercepted hlt. I'm currently
applying this to get beyond some hangups, but I'm still experiencing
some delayed IRQ delivery to L2:


Jan

Patch

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 31eb577..fad04ce 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -4684,6 +4684,7 @@  static int vmx_interrupt_allowed(struct kvm_vcpu *vcpu)
 			vmcs12->vm_exit_reason =
 				EXIT_REASON_EXTERNAL_INTERRUPT;
 			vmcs12->vm_exit_intr_info = 0;
+			kvm_make_request(KVM_REQ_UNHALT, vcpu);
 			/*
 			 * fall through to normal code, but now in L1, not L2
 			 */
@@ -8057,8 +8058,6 @@  static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
 
 	enter_guest_mode(vcpu);
 
-	vmx->nested.nested_run_pending = 1;
-
 	vmx->nested.vmcs01_tsc_offset = vmcs_read64(TSC_OFFSET);
 
 	cpu = get_cpu();
@@ -8077,6 +8076,8 @@  static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
 	if (vmcs12->guest_activity_state == GUEST_ACTIVITY_HLT)
 		return kvm_emulate_halt(vcpu);
 
+	vmx->nested.nested_run_pending = 1;
+
 	/*
 	 * Note no nested_vmx_succeed or nested_vmx_fail here. At this point
 	 * we are no longer running L1, and VMLAUNCH/VMRESUME has not yet
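In isolation, the reordering in nested_vmx_run() above can be sketched with mock types (the struct, constants, and helper names below are simplified stand-ins, not the real KVM internals): the activity-state check must return through halt emulation *before* nested_run_pending is set, so an entry that immediately halts does not leave a stale "run pending" flag behind.

```c
#include <stdbool.h>

/* Activity-state encodings per the Intel SDM: ACTIVE = 0, HLT = 1. */
#define GUEST_ACTIVITY_ACTIVE 0
#define GUEST_ACTIVITY_HLT    1

/* Mock vCPU -- field names mirror the patch, types are simplified. */
struct mock_vcpu {
	bool nested_run_pending;
	bool halted;
};

static int kvm_emulate_halt_mock(struct mock_vcpu *vcpu)
{
	/* In real KVM this moves the vCPU to a halted mp_state. */
	vcpu->halted = true;
	return 1;
}

/*
 * Sketch of the post-patch ordering in nested_vmx_run(): the HLT
 * activity state is handled (and the function returns) before
 * nested_run_pending is raised for a normal entry.
 */
static int nested_vmx_run_mock(struct mock_vcpu *vcpu, int activity_state)
{
	if (activity_state == GUEST_ACTIVITY_HLT)
		return kvm_emulate_halt_mock(vcpu);

	vcpu->nested_run_pending = true;
	return 1;
}
```

With the pre-patch ordering (nested_run_pending set first), the halting entry would return with the flag still set, which is what the hunk moving the assignment below the activity-state check avoids.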