| Message ID | 20170913130034.11046-1-sergey.dyasli@citrix.com (mailing list archive) |
| --- | --- |
| State | New, archived |
> From: Sergey Dyasli [mailto:sergey.dyasli@citrix.com]
> Sent: Wednesday, September 13, 2017 9:01 PM
>
> Under the following circumstances:
>
> 1. L1 doesn't enable PAUSE exiting or PAUSE-loop exiting controls
> 2. L2 executes PAUSE in a loop with RFLAGS.IE == 0
>
> L1's PV IPI through event channel will never reach the target L1's vCPU
> which runs L2, because nvmx_intr_intercept() doesn't know about
> hvm_intsrc_vector. This leads to an infinite L2 loop without nested
> vmexits and can cause L1 to hang.
>
> The issue is easily reproduced with Qemu/KVM on CentOS-7-1611 as L1
> and an L2 guest with SMP.
>
> Fix nvmx_intr_intercept() by injecting the hvm_intsrc_vector irq into L1,
> which will cause a nested vmexit.
>
> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>

Acked-by: Kevin Tian <kevin.tian@intel.com>
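For reference, the hang scenario quoted above boils down to an L2 busy-wait of roughly the following shape. This is a minimal, hypothetical sketch of guest-kernel (CPL0) code, not taken from any real guest; the flag and helper names are made up. With RFLAGS.IE clear and neither PAUSE exiting nor PAUSE-loop exiting enabled by L1, nothing in this loop ever forces a nested vmexit, so the event-channel IPI aimed at this vCPU is never noticed.

```c
/*
 * Hypothetical sketch of an L2 code path that exposes the bug: a busy
 * wait that executes PAUSE with interrupts disabled (RFLAGS.IE == 0).
 * Runs in the L2 kernel at CPL0; work_done and cpu_relax() are invented
 * names for illustration only.
 */
#include <stdbool.h>

static volatile bool work_done;   /* hypothetical flag set by another vCPU */

static inline void cpu_relax(void)
{
    asm volatile ( "pause" ::: "memory" );
}

static void wait_with_irqs_off(void)
{
    asm volatile ( "cli" );       /* RFLAGS.IE == 0 from here on */

    while ( !work_done )          /* the other vCPU's IPI is never delivered... */
        cpu_relax();              /* ...so L2 spins here indefinitely */

    asm volatile ( "sti" );
}
```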
```diff
diff --git a/xen/arch/x86/hvm/vmx/intr.c b/xen/arch/x86/hvm/vmx/intr.c
index e1d0190ca9..4c0f1c8f71 100644
--- a/xen/arch/x86/hvm/vmx/intr.c
+++ b/xen/arch/x86/hvm/vmx/intr.c
@@ -188,13 +188,13 @@ static int nvmx_intr_intercept(struct vcpu *v, struct hvm_intack intack)
 
     if ( nestedhvm_vcpu_in_guestmode(v) )
     {
+        ctrl = get_vvmcs(v, PIN_BASED_VM_EXEC_CONTROL);
+        if ( !(ctrl & PIN_BASED_EXT_INTR_MASK) )
+            return 0;
+
         if ( intack.source == hvm_intsrc_pic ||
              intack.source == hvm_intsrc_lapic )
         {
-            ctrl = get_vvmcs(v, PIN_BASED_VM_EXEC_CONTROL);
-            if ( !(ctrl & PIN_BASED_EXT_INTR_MASK) )
-                return 0;
-
             vmx_inject_extint(intack.vector, intack.source);
 
             ctrl = get_vvmcs(v, VM_EXIT_CONTROLS);
@@ -213,6 +213,11 @@ static int nvmx_intr_intercept(struct vcpu *v, struct hvm_intack intack)
 
             return 1;
         }
+        else if ( intack.source == hvm_intsrc_vector )
+        {
+            vmx_inject_extint(intack.vector, intack.source);
+            return 1;
+        }
     }
 
     return 0;
```
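Taken together, the two hunks hoist the PIN_BASED_EXT_INTR_MASK check ahead of the source-specific handling and add an injection path for hvm_intsrc_vector. The standalone model below is not Xen code: the enum, function names and the main() driver are invented stand-ins, and only PIN_BASED_EXT_INTR_MASK (bit 0 of the pin-based VM-execution controls) matches a real constant. It mirrors just the post-patch control flow of the guest-mode branch.

```c
/* Standalone model of the post-patch decision logic; illustration only. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PIN_BASED_EXT_INTR_MASK (1u << 0)   /* external-interrupt exiting */

enum intsrc { intsrc_none, intsrc_pic, intsrc_lapic, intsrc_vector };

/* Returns true when the interrupt is injected into L1 (causing a nested vmexit). */
static bool intercepts_for_l1(uint32_t pin_based_ctrl, enum intsrc src)
{
    if ( !(pin_based_ctrl & PIN_BASED_EXT_INTR_MASK) )
        return false;                       /* L1 doesn't want ext-intr exits */

    if ( src == intsrc_pic || src == intsrc_lapic )
        return true;                        /* pre-existing injection path */

    if ( src == intsrc_vector )
        return true;                        /* new path added by this patch */

    return false;
}

int main(void)
{
    /* Before the patch the intsrc_vector case fell through and returned 0. */
    printf("PV IPI intercepted: %d\n",
           intercepts_for_l1(PIN_BASED_EXT_INTR_MASK, intsrc_vector));
    return 0;
}
```

Before the patch, the hvm_intsrc_vector case simply fell through to the final return 0, which is exactly the "no nested vmexit" behaviour the commit message describes.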
Under the following circumstances:

1. L1 doesn't enable PAUSE exiting or PAUSE-loop exiting controls
2. L2 executes PAUSE in a loop with RFLAGS.IE == 0

L1's PV IPI through event channel will never reach the target L1's vCPU
which runs L2, because nvmx_intr_intercept() doesn't know about
hvm_intsrc_vector. This leads to an infinite L2 loop without nested
vmexits and can cause L1 to hang.

The issue is easily reproduced with Qemu/KVM on CentOS-7-1611 as L1
and an L2 guest with SMP.

Fix nvmx_intr_intercept() by injecting the hvm_intsrc_vector irq into L1,
which will cause a nested vmexit.

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
 xen/arch/x86/hvm/vmx/intr.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)
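As background on the injection side: on VMX, delivering an external interrupt to the guest on VM entry is encoded via the VM-entry interruption-information field documented in the Intel SDM (vector in bits 7:0, interruption type in bits 10:8 with 0 meaning external interrupt, valid bit 31). The helper below is only a sketch of that field layout under those SDM definitions; it is not Xen's vmx_inject_extint() implementation, and the vector value used in main() is arbitrary.

```c
/* Sketch of the VM-entry interruption-information encoding; not Xen code. */
#include <stdint.h>
#include <stdio.h>

#define INTR_INFO_VECTOR_MASK  0xffu        /* bits 7:0  - vector            */
#define INTR_TYPE_EXT_INTR     (0u << 8)    /* bits 10:8 - external interrupt */
#define INTR_INFO_VALID_MASK   (1u << 31)   /* bit 31    - valid              */

static uint32_t make_extint_intr_info(uint8_t vector)
{
    return INTR_INFO_VALID_MASK | INTR_TYPE_EXT_INTR |
           (vector & INTR_INFO_VECTOR_MASK);
}

int main(void)
{
    /* Arbitrary example vector standing in for a PV-IPI style vector. */
    printf("intr_info = 0x%08x\n", (unsigned int)make_extint_intr_info(0xf3));
    return 0;
}
```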