From patchwork Sat Feb 16 17:10:14 2013
X-Patchwork-Submitter: Jan Kiszka
X-Patchwork-Id: 2152241
Message-ID: <511FBD76.8010307@web.de>
Date: Sat, 16 Feb 2013 18:10:14 +0100
From: Jan Kiszka
To: Gleb Natapov, Marcelo Tosatti
CC: kvm, Nadav Har'El, "Nakajima, Jun"
Subject: [PATCH] KVM: nVMX: Fix direct injection of interrupts from L0 to L2
X-Mailing-List: kvm@vger.kernel.org

From: Jan Kiszka

If L1 does not set PIN_BASED_EXT_INTR_MASK, we incorrectly skipped
vmx_complete_interrupts on L2 exits. This is required because, with
direct interrupt injection from L0 to L2, L0 has to update its pending
events.

Also, we need to allow vmx_cancel_injection when entering L2 if we
left to L0 last time. This condition is indirectly derived from the
absence of valid vectoring info in vmcs12. We now explicitly clear that
field if we find out that the L2 exit is not targeting L1 but L0.

Signed-off-by: Jan Kiszka
---
 arch/x86/kvm/vmx.c | 43 +++++++++++++++++++++++++++----------------
 1 files changed, 27 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 68a045ae..464b6a5 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -624,6 +624,7 @@ static void vmx_get_segment(struct kvm_vcpu *vcpu,
 			    struct kvm_segment *var, int seg);
 static bool guest_state_valid(struct kvm_vcpu *vcpu);
 static u32 vmx_segment_access_rights(struct kvm_segment *var);
+static void vmx_complete_interrupts(struct vcpu_vmx *vmx);
 
 static DEFINE_PER_CPU(struct vmcs *, vmxarea);
 static DEFINE_PER_CPU(struct vmcs *, current_vmcs);
@@ -6213,9 +6214,19 @@ static int vmx_handle_exit(struct kvm_vcpu *vcpu)
 	else
 		vmx->nested.nested_run_pending = 0;
 
-	if (is_guest_mode(vcpu) && nested_vmx_exit_handled(vcpu)) {
-		nested_vmx_vmexit(vcpu);
-		return 1;
+	if (is_guest_mode(vcpu)) {
+		if (nested_vmx_exit_handled(vcpu)) {
+			nested_vmx_vmexit(vcpu);
+			return 1;
+		}
+		/*
+		 * Now it's clear, we are leaving to L0. Perform the postponed
+		 * interrupt completion and clear L1's vectoring info field so
+		 * that we do not overwrite what L0 wants to inject on
+		 * re-entry.
+		 */
+		vmx_complete_interrupts(vmx);
+		get_vmcs12(vcpu)->idt_vectoring_info_field = 0;
 	}
 
 	if (exit_reason & VMX_EXIT_REASONS_FAILED_VMENTRY) {
@@ -6495,8 +6506,6 @@ static void __vmx_complete_interrupts(struct vcpu_vmx *vmx,
 
 static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
 {
-	if (is_guest_mode(&vmx->vcpu))
-		return;
 	__vmx_complete_interrupts(vmx, vmx->idt_vectoring_info,
 				  VM_EXIT_INSTRUCTION_LEN,
 				  IDT_VECTORING_ERROR_CODE);
@@ -6504,7 +6513,9 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
 
 static void vmx_cancel_injection(struct kvm_vcpu *vcpu)
 {
-	if (is_guest_mode(vcpu))
+	if (is_guest_mode(vcpu) &&
+	    get_vmcs12(vcpu)->idt_vectoring_info_field &
+	    VECTORING_INFO_VALID_MASK)
 		return;
 	__vmx_complete_interrupts(to_vmx(vcpu),
 				  vmcs_read32(VM_ENTRY_INTR_INFO_FIELD),
@@ -6710,6 +6721,14 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
 
 	vmx->idt_vectoring_info = vmcs_read32(IDT_VECTORING_INFO_FIELD);
 
+	vmx->loaded_vmcs->launched = 1;
+
+	vmx->exit_reason = vmcs_read32(VM_EXIT_REASON);
+	trace_kvm_exit(vmx->exit_reason, vcpu, KVM_ISA_VMX);
+
+	vmx_complete_atomic_exit(vmx);
+	vmx_recover_nmi_blocking(vmx);
+
 	if (is_guest_mode(vcpu)) {
 		struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
 		vmcs12->idt_vectoring_info_field = vmx->idt_vectoring_info;
@@ -6719,16 +6738,8 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
 			vmcs12->vm_exit_instruction_len =
 				vmcs_read32(VM_EXIT_INSTRUCTION_LEN);
 		}
-	}
-
-	vmx->loaded_vmcs->launched = 1;
-
-	vmx->exit_reason = vmcs_read32(VM_EXIT_REASON);
-	trace_kvm_exit(vmx->exit_reason, vcpu, KVM_ISA_VMX);
-
-	vmx_complete_atomic_exit(vmx);
-	vmx_recover_nmi_blocking(vmx);
-	vmx_complete_interrupts(vmx);
+	} else
+		vmx_complete_interrupts(vmx);
 }
 
 static void vmx_free_vcpu(struct kvm_vcpu *vcpu)
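
For readers following the two decisions the patch couples together, here
is a minimal standalone model of the protocol: on an L2 exit that L0
handles, interrupt completion runs and vmcs12's vectoring info is
cleared; on the next entry, cancellation is skipped only while vmcs12
still carries valid vectoring info (i.e. the last exit went to L1). This
is only an illustrative sketch, not kernel code: struct vmcs12_model and
both helpers are simplified stand-ins, and it builds as a plain
userspace program (cc -o model model.c).

    /* model.c - simplified stand-ins, not kernel code */
    #include <stdbool.h>
    #include <stdio.h>

    #define VECTORING_INFO_VALID_MASK (1u << 31)

    struct vmcs12_model {
            unsigned int idt_vectoring_info_field;
    };

    /* Exit side: mirrors the new vmx_handle_exit hunk. */
    static void l2_exit(struct vmcs12_model *vmcs12, bool handled_by_l1)
    {
            if (handled_by_l1)
                    return; /* reflected to L1, vectoring info stays valid */
            /* leaving to L0: complete interrupts, drop stale vectoring info */
            vmcs12->idt_vectoring_info_field = 0;
    }

    /* Entry side: mirrors the new vmx_cancel_injection condition. */
    static bool skip_cancel_injection(const struct vmcs12_model *vmcs12,
                                      bool guest_mode)
    {
            return guest_mode &&
                   (vmcs12->idt_vectoring_info_field &
                    VECTORING_INFO_VALID_MASK);
    }

    int main(void)
    {
            struct vmcs12_model vmcs12 = {
                    .idt_vectoring_info_field =
                            VECTORING_INFO_VALID_MASK | 0x20,
            };

            l2_exit(&vmcs12, false); /* this exit goes to L0 */
            printf("cancel skipped: %d\n",
                   skip_cancel_injection(&vmcs12, true));
            /* prints 0: info was cleared, so L0's injection may be canceled */
            return 0;
    }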