x86/kvm/vmx: Don't halt vcpu when L1 is injecting events to L2

Message ID 1518066816-7197-1-git-send-email-chao.gao@intel.com (mailing list archive)
State New, archived

Commit Message

Chao Gao Feb. 8, 2018, 5:13 a.m. UTC
Although L2 is in the halted state, it will be in the active state
after VM entry if the VM entry is vectoring. Halting the vcpu here
means the event won't be injected into L2, and this decision isn't
reported to L1. Thus L0 drops an event that should have been injected
into L2.

Because virtual interrupt delivery may wake the L2 vcpu, do the same
thing when VID is enabled -- don't halt L2.

Signed-off-by: Chao Gao <chao.gao@intel.com>
---
 arch/x86/kvm/vmx.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)
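
For context, "vectoring" here means that the valid bit (bit 31) of the
VM-entry interruption-information field is set, i.e. L1 has requested
event injection into L2 for this entry. Below is a minimal sketch of
that check, assuming the vmx.c context with vmcs02 already loaded; the
helper name is hypothetical and not part of the patch:

/*
 * Sketch only, not part of the posted patch: a VM entry is "vectoring"
 * when the valid bit (bit 31) of the VM-entry interruption-information
 * field is set, meaning L1 asked for an event to be injected into L2
 * on this entry.  The helper name is hypothetical.
 */
static inline bool nested_entry_is_vectoring(void)
{
	u32 intr_info = vmcs_read32(VM_ENTRY_INTR_INFO_FIELD);

	return intr_info & VECTORING_INFO_VALID_MASK;
}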

Comments

Paolo Bonzini Feb. 8, 2018, 10:29 a.m. UTC | #1
On 08/02/2018 06:13, Chao Gao wrote:
> Although L2 is in the halted state, it will be in the active state
> after VM entry if the VM entry is vectoring. Halting the vcpu here
> means the event won't be injected into L2, and this decision isn't
> reported to L1. Thus L0 drops an event that should have been injected
> into L2.
> 
> Because virtual interrupt delivery may wake the L2 vcpu, do the same
> thing when VID is enabled -- don't halt L2.

This second part seems wrong to me, or at least overly general.  Perhaps
you mean if RVI>0?

Thanks,

Paolo

> Signed-off-by: Chao Gao <chao.gao@intel.com>
> ---
>  arch/x86/kvm/vmx.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index bb5b488..e1fe4e4 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -10985,8 +10985,14 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
>  	if (ret)
>  		return ret;
>  
> -	if (vmcs12->guest_activity_state == GUEST_ACTIVITY_HLT)
> -		return kvm_vcpu_halt(vcpu);
> +	if (vmcs12->guest_activity_state == GUEST_ACTIVITY_HLT) {
> +		u32 intr_info = vmcs_read32(VM_ENTRY_INTR_INFO_FIELD);
> +		u32 exec_control = vmcs_read32(SECONDARY_VM_EXEC_CONTROL);
> +
> +		if (!(intr_info & VECTORING_INFO_VALID_MASK) &&
> +			!(exec_control & SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY))
> +			return kvm_vcpu_halt(vcpu);
> +	}

>  	vmx->nested.nested_run_pending = 1;
>  
>
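
For reference, a hedged sketch of what Paolo's RVI suggestion might
look like; this was not posted to the list, and it assumes RVI can be
read at this point as the low byte of the guest interrupt status field
in the currently loaded (vmcs02) VMCS:

	/* Sketch based on Paolo's review comment, not the posted patch. */
	if (vmcs12->guest_activity_state == GUEST_ACTIVITY_HLT) {
		u32 intr_info = vmcs_read32(VM_ENTRY_INTR_INFO_FIELD);
		u32 exec_control = vmcs_read32(SECONDARY_VM_EXEC_CONTROL);
		/* RVI is the low byte of the guest interrupt status field. */
		u8 rvi = vmcs_read16(GUEST_INTR_STATUS) & 0xff;

		/*
		 * Halt only if this entry is not vectoring and virtual
		 * interrupt delivery has no pending virtual interrupt.
		 */
		if (!(intr_info & VECTORING_INFO_VALID_MASK) &&
		    !((exec_control & SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY) &&
		      rvi > 0))
			return kvm_vcpu_halt(vcpu);
	}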

Patch

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index bb5b488..e1fe4e4 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -10985,8 +10985,14 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
 	if (ret)
 		return ret;
 
-	if (vmcs12->guest_activity_state == GUEST_ACTIVITY_HLT)
-		return kvm_vcpu_halt(vcpu);
+	if (vmcs12->guest_activity_state == GUEST_ACTIVITY_HLT) {
+		u32 intr_info = vmcs_read32(VM_ENTRY_INTR_INFO_FIELD);
+		u32 exec_control = vmcs_read32(SECONDARY_VM_EXEC_CONTROL);
+
+		if (!(intr_info & VECTORING_INFO_VALID_MASK) &&
+			!(exec_control & SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY))
+			return kvm_vcpu_halt(vcpu);
+	}
 
 	vmx->nested.nested_run_pending = 1;