From patchwork Fri Sep 18 13:00:28 2009
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alexander Graf
X-Patchwork-Id: 48530
Received: from vger.kernel.org (vger.kernel.org [209.132.176.167])
	by demeter.kernel.org (8.14.2/8.14.2) with ESMTP id n8ID0gJq022742
	for ; Fri, 18 Sep 2009 13:00:43 GMT
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1755935AbZIRNAe (ORCPT ); Fri, 18 Sep 2009 09:00:34 -0400
Received: (majordomo@vger.kernel.org) by vger.kernel.org
	id S1755910AbZIRNAd (ORCPT ); Fri, 18 Sep 2009 09:00:33 -0400
Received: from cantor2.suse.de ([195.135.220.15]:49034 "EHLO mx2.suse.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751540AbZIRNAa (ORCPT ); Fri, 18 Sep 2009 09:00:30 -0400
Received: from relay2.suse.de (mail2.suse.de [195.135.221.8])
	by mx2.suse.de (Postfix) with ESMTP id 3BD60867E2
	for ; Fri, 18 Sep 2009 15:00:33 +0200 (CEST)
From: Alexander Graf
To: kvm@vger.kernel.org
Subject: [PATCH 1/5] Implement #NMI exiting for nested SVM
Date: Fri, 18 Sep 2009 15:00:28 +0200
Message-Id: <1253278832-31803-2-git-send-email-agraf@suse.de>
X-Mailer: git-send-email 1.6.0.2
In-Reply-To: <1253278832-31803-1-git-send-email-agraf@suse.de>
References: <1253278832-31803-1-git-send-email-agraf@suse.de>
Sender: kvm-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: kvm@vger.kernel.org

When injecting an NMI into the l1 guest while it was running the l2 guest,
we didn't #VMEXIT but instead injected the NMI into the l2 guest. Let's be
closer to real hardware and #VMEXIT if we're supposed to do so.
Signed-off-by: Alexander Graf
---
 arch/x86/kvm/svm.c |   38 ++++++++++++++++++++++++++++++++------
 1 files changed, 32 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 9a4daca..f12a669 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1375,6 +1375,21 @@ static int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
 	return nested_svm_exit_handled(svm);
 }
 
+static inline int nested_svm_nmi(struct vcpu_svm *svm)
+{
+	if (!is_nested(svm))
+		return 0;
+
+	svm->vmcb->control.exit_code = SVM_EXIT_NMI;
+
+	if (nested_svm_exit_handled(svm)) {
+		nsvm_printk("VMexit -> NMI\n");
+		return 1;
+	}
+
+	return 0;
+}
+
 static inline int nested_svm_intr(struct vcpu_svm *svm)
 {
 	if (!is_nested(svm))
@@ -2462,7 +2477,9 @@ static int svm_nmi_allowed(struct kvm_vcpu *vcpu)
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct vmcb *vmcb = svm->vmcb;
 	return !(vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK) &&
-		!(svm->vcpu.arch.hflags & HF_NMI_MASK);
+		!(svm->vcpu.arch.hflags & HF_NMI_MASK) &&
+		gif_set(svm) &&
+		!is_nested(svm);
 }
 
 static int svm_interrupt_allowed(struct kvm_vcpu *vcpu)
@@ -2488,22 +2505,31 @@ static void enable_irq_window(struct kvm_vcpu *vcpu)
 	struct vcpu_svm *svm = to_svm(vcpu);
 
 	nsvm_printk("Trying to open IRQ window\n");
 
-	nested_svm_intr(svm);
+	if (nested_svm_intr(svm))
+		return;
 
 	/* In case GIF=0 we can't rely on the CPU to tell us when
 	 * GIF becomes 1, because that's a separate STGI/VMRUN intercept.
 	 * The next time we get that intercept, this function will be
 	 * called again though and we'll get the vintr intercept. */
-	if (gif_set(svm)) {
-		svm_set_vintr(svm);
-		svm_inject_irq(svm, 0x0);
-	}
+	if (!gif_set(svm))
+		return;
+
+	svm_set_vintr(svm);
+	svm_inject_irq(svm, 0x0);
 }
 
 static void enable_nmi_window(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
+	if (nested_svm_nmi(svm))
+		return;
+
+	/* NMI is deferred until GIF == 1.
+	 * Setting GIF will cause a #VMEXIT */
+	if (!gif_set(svm))
+		return;
+
 	if ((svm->vcpu.arch.hflags & (HF_NMI_MASK | HF_IRET_MASK))
 	    == HF_NMI_MASK)
 		return; /* IRET will cause a vm exit */