From patchwork Fri Apr 24 17:23:55 2020
X-Patchwork-Id: 11508681
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: wei.huang2@amd.com, cavery@redhat.com, vkuznets@redhat.com,
    Sean Christopherson, Oliver Upton, Jim Mattson
Subject: [PATCH v2 01/22] KVM: SVM: introduce nested_run_pending
Date: Fri, 24 Apr 2020 13:23:55 -0400
Message-Id: <20200424172416.243870-2-pbonzini@redhat.com>
In-Reply-To: <20200424172416.243870-1-pbonzini@redhat.com>
References: <20200424172416.243870-1-pbonzini@redhat.com>

We want to inject vmexits immediately from svm_check_nested_events, so
that the interrupt/NMI window requests happen in inject_pending_event
right after it returns. This, however, has the same issue as
vmx_check_nested_events, so introduce a nested_run_pending flag with
the same purpose: delaying vmexit injection until after the vmentry.
Signed-off-by: Paolo Bonzini
---
 arch/x86/kvm/svm/nested.c | 3 ++-
 arch/x86/kvm/svm/svm.c    | 1 +
 arch/x86/kvm/svm/svm.h    | 4 ++++
 3 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index a7c3b3030e59..51cfab68428d 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -413,6 +413,7 @@ int nested_svm_vmrun(struct vcpu_svm *svm)
 
     copy_vmcb_control_area(hsave, vmcb);
 
+    svm->nested.nested_run_pending = 1;
     enter_svm_guest_mode(svm, vmcb_gpa, nested_vmcb, &map);
 
     if (!nested_svm_vmrun_msrpm(svm)) {
@@ -792,7 +793,8 @@ static int svm_check_nested_events(struct kvm_vcpu *vcpu)
     struct vcpu_svm *svm = to_svm(vcpu);
     bool block_nested_events =
-        kvm_event_needs_reinjection(vcpu) || svm->nested.exit_required;
+        kvm_event_needs_reinjection(vcpu) || svm->nested.exit_required ||
+        svm->nested.nested_run_pending;
 
     if (kvm_cpu_has_interrupt(vcpu) && nested_exit_on_intr(svm)) {
         if (block_nested_events)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index c86f7278509b..77440b5953e3 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3417,6 +3417,7 @@ static enum exit_fastpath_completion svm_vcpu_run(struct kvm_vcpu *vcpu)
     sync_cr8_to_lapic(vcpu);
 
     svm->next_rip = 0;
+    svm->nested.nested_run_pending = 0;
 
     svm->vmcb->control.tlb_ctl = TLB_CONTROL_DO_NOTHING;
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 98c2890d561d..435f3328c99c 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -97,6 +97,10 @@ struct nested_state {
     /* A VMEXIT is required but not yet emulated */
     bool exit_required;
 
+    /* A VMRUN has started but has not yet been performed, so
+     * we cannot inject a nested vmexit yet. */
+    bool nested_run_pending;
+
     /* cache for intercepts of the guest */
     u32 intercept_cr;
     u32 intercept_dr;
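The effect of the new flag can be shown with a standalone sketch (plain
userspace C, compilable on its own; the struct and helper below are
simplified stand-ins for svm->nested and the block_nested_events
computation, not the kernel's types): between emulation of VMRUN and the
first vmentry of L2, vmexit injection is held off.

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the relevant part of struct nested_state. */
struct nested_state_model {
    bool exit_required;
    bool nested_run_pending;    /* set at VMRUN, cleared after vmentry */
};

/* Mirrors the block_nested_events expression in svm_check_nested_events(). */
static bool block_nested_events(const struct nested_state_model *n,
                                bool needs_reinjection)
{
    return needs_reinjection || n->exit_required || n->nested_run_pending;
}

int main(void)
{
    struct nested_state_model n = { false, true }; /* VMRUN just emulated */

    printf("blocked: %d\n", block_nested_events(&n, false)); /* 1 */
    n.nested_run_pending = false;  /* cleared at the end of svm_vcpu_run() */
    printf("blocked: %d\n", block_nested_events(&n, false)); /* 0 */
    return 0;
}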
From patchwork Fri Apr 24 17:23:56 2020
X-Patchwork-Id: 11508697
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: wei.huang2@amd.com, cavery@redhat.com, vkuznets@redhat.com,
    Sean Christopherson, Oliver Upton, Jim Mattson
Subject: [PATCH v2 02/22] KVM: SVM: leave halted state on vmexit
Date: Fri, 24 Apr 2020 13:23:56 -0400
Message-Id: <20200424172416.243870-3-pbonzini@redhat.com>
In-Reply-To: <20200424172416.243870-1-pbonzini@redhat.com>
References: <20200424172416.243870-1-pbonzini@redhat.com>

Similar to VMX, we need to leave the halted state when performing a
vmexit. Failure to do so will cause the vCPU to hang after the vmexit.

Signed-off-by: Paolo Bonzini
Reviewed-by: Oliver Upton
---
 arch/x86/kvm/svm/nested.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 51cfab68428d..e69e60ac1370 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -472,6 +472,9 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
     leave_guest_mode(&svm->vcpu);
     svm->nested.vmcb = 0;
 
+    /* in case we halted in L2 */
+    svm->vcpu.arch.mp_state = KVM_MP_STATE_RUNNABLE;
+
     /* Give the current vmcb to the guest */
     disable_gif(svm);
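Why the reset matters can be shown with a toy model (standalone C; the
enum values are illustrative stand-ins, not KVM's ABI constants): a vCPU
that halted in L2 must become runnable again so L1 can handle the vmexit.

#include <stdio.h>

enum mp_state_model { RUNNABLE, HALTED };

struct vcpu_model { enum mp_state_model mp_state; };

/* Models the line added to nested_svm_vmexit(). */
static void nested_vmexit_model(struct vcpu_model *v)
{
    /* in case we halted in L2: otherwise the vCPU keeps waiting for a
     * wakeup event even though L1 now has work to do, and the guest hangs */
    v->mp_state = RUNNABLE;
}

int main(void)
{
    struct vcpu_model v = { HALTED };   /* L2 executed HLT */
    nested_vmexit_model(&v);            /* e.g. an intercepted event */
    printf("%s\n", v.mp_state == RUNNABLE ? "runnable" : "halted");
    return 0;
}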
From patchwork Fri Apr 24 17:23:57 2020
X-Patchwork-Id: 11508695
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: wei.huang2@amd.com, cavery@redhat.com, vkuznets@redhat.com,
    Sean Christopherson, Oliver Upton, Jim Mattson
Subject: [PATCH v2 03/22] KVM: SVM: immediately inject INTR vmexit
Date: Fri, 24 Apr 2020 13:23:57 -0400
Message-Id: <20200424172416.243870-4-pbonzini@redhat.com>
In-Reply-To: <20200424172416.243870-1-pbonzini@redhat.com>
References: <20200424172416.243870-1-pbonzini@redhat.com>

We can immediately leave SVM guest mode in svm_check_nested_events now
that we have the nested_run_pending mechanism. This makes things easier
because we can run the rest of inject_pending_event with GIF=0, and KVM
will naturally end up requesting the next interrupt window.

Signed-off-by: Paolo Bonzini
---
 arch/x86/kvm/svm/nested.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index e69e60ac1370..266fde240493 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -778,13 +778,13 @@ int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
 
 static void nested_svm_intr(struct vcpu_svm *svm)
 {
+    trace_kvm_nested_intr_vmexit(svm->vmcb->save.rip);
+
     svm->vmcb->control.exit_code = SVM_EXIT_INTR;
     svm->vmcb->control.exit_info_1 = 0;
     svm->vmcb->control.exit_info_2 = 0;
 
-    /* nested_svm_vmexit this gets called afterwards from handle_exit */
-    svm->nested.exit_required = true;
-    trace_kvm_nested_intr_vmexit(svm->vmcb->save.rip);
+    nested_svm_vmexit(svm);
 }
 
 static bool nested_exit_on_intr(struct vcpu_svm *svm)
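The before/after shape of nested_svm_intr() can be sketched as a
standalone model (plain C; the two fields stand in for GIF and
svm->nested.exit_required, not the kernel's state):

#include <stdbool.h>
#include <stdio.h>

struct svm_model { bool gif; bool exit_required; };

/* Before: flag the vmexit and let handle_exit emulate it later. */
static void nested_svm_intr_deferred(struct svm_model *s)
{
    s->exit_required = true;
}

/* After: perform the vmexit synchronously; GIF ends up clear, so the
 * rest of inject_pending_event runs with interrupts held off and KVM
 * requests the next interrupt window. */
static void nested_svm_intr_immediate(struct svm_model *s)
{
    s->gif = false;     /* stands in for disable_gif() in the vmexit */
}

int main(void)
{
    struct svm_model before = { true, false }, after = { true, false };

    nested_svm_intr_deferred(&before);
    nested_svm_intr_immediate(&after);
    printf("deferred:  exit_required=%d gif=%d\n",
           before.exit_required, before.gif);
    printf("immediate: exit_required=%d gif=%d\n",
           after.exit_required, after.gif);
    return 0;
}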
From patchwork Fri Apr 24 17:23:58 2020
X-Patchwork-Id: 11508693
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: wei.huang2@amd.com, cavery@redhat.com, vkuznets@redhat.com,
    Sean Christopherson, Oliver Upton, Jim Mattson
Subject: [PATCH v2 04/22] KVM: SVM: Implement check_nested_events for NMI
Date: Fri, 24 Apr 2020 13:23:58 -0400
Message-Id: <20200424172416.243870-5-pbonzini@redhat.com>
In-Reply-To: <20200424172416.243870-1-pbonzini@redhat.com>
References: <20200424172416.243870-1-pbonzini@redhat.com>

From: Cathy Avery

Migrate nested guest NMI intercept processing to the new
check_nested_events.

Signed-off-by: Cathy Avery
Message-Id: <20200414201107.22952-2-cavery@redhat.com>
[Reorder the clauses, as NMIs have higher priority than IRQs; inject an
 immediate vmexit, as is now done for IRQ vmexits. - Paolo]
Signed-off-by: Paolo Bonzini
---
 arch/x86/kvm/svm/nested.c | 21 +++++++++++++++++++++
 arch/x86/kvm/svm/svm.c    |  6 ++----
 arch/x86/kvm/svm/svm.h    | 15 ---------------
 3 files changed, 23 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 266fde240493..c3650efd2e89 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -776,6 +776,20 @@ int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
     return vmexit;
 }
 
+static bool nested_exit_on_nmi(struct vcpu_svm *svm)
+{
+    return (svm->nested.intercept & (1ULL << INTERCEPT_NMI));
+}
+
+static void nested_svm_nmi(struct vcpu_svm *svm)
+{
+    svm->vmcb->control.exit_code = SVM_EXIT_NMI;
+    svm->vmcb->control.exit_info_1 = 0;
+    svm->vmcb->control.exit_info_2 = 0;
+
+    nested_svm_vmexit(svm);
+}
+
 static void nested_svm_intr(struct vcpu_svm *svm)
 {
     trace_kvm_nested_intr_vmexit(svm->vmcb->save.rip);
@@ -798,6 +812,13 @@ static int svm_check_nested_events(struct kvm_vcpu *vcpu)
         kvm_event_needs_reinjection(vcpu) || svm->nested.exit_required ||
         svm->nested.nested_run_pending;
 
+    if (vcpu->arch.nmi_pending && nested_exit_on_nmi(svm)) {
+        if (block_nested_events)
+            return -EBUSY;
+        nested_svm_nmi(svm);
+        return 0;
+    }
+
     if (kvm_cpu_has_interrupt(vcpu) && nested_exit_on_intr(svm)) {
         if (block_nested_events)
             return -EBUSY;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 77440b5953e3..8e732eb0b5c9 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3067,9 +3067,10 @@ static int svm_nmi_allowed(struct kvm_vcpu *vcpu)
     struct vcpu_svm *svm = to_svm(vcpu);
     struct vmcb *vmcb = svm->vmcb;
     int ret;
+
     ret = !(vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK) &&
           !(svm->vcpu.arch.hflags & HF_NMI_MASK);
-    ret = ret && gif_set(svm) && nested_svm_nmi(svm);
+    ret = ret && gif_set(svm);
 
     return ret;
 }
@@ -3147,9 +3148,6 @@ static void enable_nmi_window(struct kvm_vcpu *vcpu)
         return; /* STGI will cause a vm exit */
     }
 
-    if (svm->nested.exit_required)
-        return; /* we're not going to run the guest yet */
-
     /*
      * Something prevents NMI from been injected. Single step over possible
      * problem (IRET or exception injection or interrupt shadow)
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 435f3328c99c..a2bc33aadb67 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -373,21 +373,6 @@ void disable_nmi_singlestep(struct vcpu_svm *svm);
 #define NESTED_EXIT_DONE     1 /* Exit caused nested vmexit */
 #define NESTED_EXIT_CONTINUE 2 /* Further checks needed */
 
-/* This function returns true if it is save to enable the nmi window */
-static inline bool nested_svm_nmi(struct vcpu_svm *svm)
-{
-    if (!is_guest_mode(&svm->vcpu))
-        return true;
-
-    if (!(svm->nested.intercept & (1ULL << INTERCEPT_NMI)))
-        return true;
-
-    svm->vmcb->control.exit_code = SVM_EXIT_NMI;
-    svm->nested.exit_required = true;
-
-    return false;
-}
-
 static inline bool svm_nested_virtualize_tpr(struct kvm_vcpu *vcpu)
 {
     return is_guest_mode(vcpu) && (vcpu->arch.hflags & HF_VINTR_MASK);
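The resulting check order in svm_check_nested_events(), NMI before INTR,
can be modeled in isolation (standalone C; the booleans stand in for
vcpu->arch.nmi_pending, kvm_cpu_has_interrupt() and the intercept bits):

#include <stdbool.h>
#include <stdio.h>

struct events_model { bool nmi_pending; bool intr_pending; };

static const char *next_nested_exit(const struct events_model *e,
                                    bool exit_on_nmi, bool exit_on_intr)
{
    if (e->nmi_pending && exit_on_nmi)
        return "SVM_EXIT_NMI";  /* NMIs outrank IRQs */
    if (e->intr_pending && exit_on_intr)
        return "SVM_EXIT_INTR";
    return "none";
}

int main(void)
{
    struct events_model e = { true, true };

    /* Both pending, both intercepted: the NMI vmexit is taken first. */
    printf("%s\n", next_nested_exit(&e, true, true));
    return 0;
}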
From patchwork Fri Apr 24 17:23:59 2020
X-Patchwork-Id: 11508687
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: wei.huang2@amd.com, cavery@redhat.com, vkuznets@redhat.com,
    Sean Christopherson, Oliver Upton, Jim Mattson, Peter Shier
Subject: [PATCH v2 05/22] KVM: nVMX: Preserve exception priority irrespective of exiting behavior
Date: Fri, 24 Apr 2020 13:23:59 -0400
Message-Id: <20200424172416.243870-6-pbonzini@redhat.com>
In-Reply-To: <20200424172416.243870-1-pbonzini@redhat.com>
References: <20200424172416.243870-1-pbonzini@redhat.com>

From: Sean Christopherson

Short circuit vmx_check_nested_events() if an exception is pending and
needs to be injected into L2, as priority between coincident events is
not dependent on exiting behavior. This fixes a bug where a single-step
#DB that is not intercepted by L1 is incorrectly dropped due to
servicing a VMX Preemption Timer VM-Exit.

Injected exceptions also need to be blocked if nested VM-Enter is
pending or an exception was already injected; otherwise, injecting the
exception could overwrite an existing event injection from L1.
Technically, this scenario should be impossible, i.e. KVM shouldn't
inject its own exception during nested VM-Enter. This will be addressed
in a future patch.

Note, event priority between SMI, NMI and INTR is incorrect for L2,
e.g. SMI should take priority over VM-Exit on NMI/INTR, and an NMI that
is injected into L2 should take priority over VM-Exit INTR. This will
also be addressed in a future patch.

Fixes: b6b8a1451fc4 ("KVM: nVMX: Rework interception of IRQs and NMIs")
Reported-by: Jim Mattson
Cc: Oliver Upton
Cc: Peter Shier
Signed-off-by: Sean Christopherson
Message-Id: <20200423022550.15113-2-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini
---
 arch/x86/kvm/vmx/nested.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index b516c24494e3..490dba7d0504 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -3716,11 +3716,11 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
     /*
      * Process any exceptions that are not debug traps before MTF.
      */
-    if (vcpu->arch.exception.pending &&
-        !vmx_pending_dbg_trap(vcpu) &&
-        nested_vmx_check_exception(vcpu, &exit_qual)) {
+    if (vcpu->arch.exception.pending && !vmx_pending_dbg_trap(vcpu)) {
         if (block_nested_events)
             return -EBUSY;
+        if (!nested_vmx_check_exception(vcpu, &exit_qual))
+            goto no_vmexit;
         nested_vmx_inject_exception_vmexit(vcpu, exit_qual);
         return 0;
     }
@@ -3733,10 +3733,11 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
         return 0;
     }
 
-    if (vcpu->arch.exception.pending &&
-        nested_vmx_check_exception(vcpu, &exit_qual)) {
+    if (vcpu->arch.exception.pending) {
         if (block_nested_events)
             return -EBUSY;
+        if (!nested_vmx_check_exception(vcpu, &exit_qual))
+            goto no_vmexit;
         nested_vmx_inject_exception_vmexit(vcpu, exit_qual);
         return 0;
     }
@@ -3771,6 +3772,7 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
         return 0;
     }
 
+no_vmexit:
     vmx_complete_nested_posted_interrupt(vcpu);
     return 0;
 }
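The reordered exception logic reduces to a small decision function
(standalone C model; EBUSY_MODEL and the two booleans stand in for
-EBUSY, block_nested_events and the nested_vmx_check_exception()
intercept test): blocking is now checked before the intercept test, and
an exception L1 does not intercept is injected into L2 instead of
falling through to a lower-priority vmexit.

#include <stdbool.h>
#include <stdio.h>

#define EBUSY_MODEL (-1)

static int handle_pending_exception(bool block_nested_events,
                                    bool l1_intercepts_exception)
{
    if (block_nested_events)
        return EBUSY_MODEL;     /* retry once injection completes */
    if (!l1_intercepts_exception) {
        /* goto no_vmexit: deliver to L2, e.g. a single-step #DB */
        printf("inject exception into L2\n");
        return 0;
    }
    printf("exception vmexit to L1\n");
    return 0;
}

int main(void)
{
    handle_pending_exception(false, false); /* the case the old code lost */
    handle_pending_exception(false, true);
    return 0;
}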
From patchwork Fri Apr 24 17:24:00 2020
X-Patchwork-Id: 11508691
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: wei.huang2@amd.com, cavery@redhat.com, vkuznets@redhat.com,
    Sean Christopherson, Oliver Upton, Jim Mattson, Peter Shier
Subject: [PATCH v2 06/22] KVM: nVMX: Open a window for pending nested VMX preemption timer
Date: Fri, 24 Apr 2020 13:24:00 -0400
Message-Id: <20200424172416.243870-7-pbonzini@redhat.com>
In-Reply-To: <20200424172416.243870-1-pbonzini@redhat.com>
References: <20200424172416.243870-1-pbonzini@redhat.com>

From: Sean Christopherson

Add a kvm_x86_ops hook to detect a nested pending "hypervisor timer"
and use it to effectively open a window for servicing the expired
timer. Like pending SMIs on VMX, opening a window simply means
requesting an immediate exit.

This fixes a bug where an expired VMX preemption timer (for L2) will be
delayed and/or lost if a pending exception is injected into L2. The
pending exception is rightly prioritized by vmx_check_nested_events()
and injected into L2, with the preemption timer left pending. Because
no window opened, L2 is free to run uninterrupted.

Fixes: f4124500c2c13 ("KVM: nVMX: Fully emulate preemption timer")
Reported-by: Jim Mattson
Cc: Oliver Upton
Cc: Peter Shier
Signed-off-by: Sean Christopherson
Message-Id: <20200423022550.15113-3-sean.j.christopherson@intel.com>
[Check it in kvm_vcpu_has_events too, to ensure that the preemption
 timer is serviced promptly even if the vCPU is halted and L1 is not
 intercepting HLT. - Paolo]
Signed-off-by: Paolo Bonzini
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/vmx/nested.c       | 10 ++++++++--
 arch/x86/kvm/x86.c              |  9 +++++++++
 3 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index a239a297be33..372e6ea4af32 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1257,6 +1257,7 @@ struct kvm_x86_ops {
 
 struct kvm_x86_nested_ops {
     int (*check_events)(struct kvm_vcpu *vcpu);
+    bool (*hv_timer_pending)(struct kvm_vcpu *vcpu);
     int (*get_state)(struct kvm_vcpu *vcpu,
              struct kvm_nested_state __user *user_kvm_nested_state,
              unsigned user_data_size);
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 490dba7d0504..182e5209a1f6 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -3687,6 +3687,12 @@ static void nested_vmx_update_pending_dbg(struct kvm_vcpu *vcpu)
                 vcpu->arch.exception.payload);
 }
 
+static bool nested_vmx_preemption_timer_pending(struct kvm_vcpu *vcpu)
+{
+    return nested_cpu_has_preemption_timer(get_vmcs12(vcpu)) &&
+           to_vmx(vcpu)->nested.preemption_timer_expired;
+}
+
 static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
 {
     struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -3742,8 +3748,7 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
         return 0;
     }
 
-    if (nested_cpu_has_preemption_timer(get_vmcs12(vcpu)) &&
-        vmx->nested.preemption_timer_expired) {
+    if (nested_vmx_preemption_timer_pending(vcpu)) {
         if (block_nested_events)
             return -EBUSY;
         nested_vmx_vmexit(vcpu, EXIT_REASON_PREEMPTION_TIMER, 0, 0);
@@ -6448,6 +6453,7 @@ __init int nested_vmx_hardware_setup(struct kvm_x86_ops *ops,
 
 struct kvm_x86_nested_ops vmx_nested_ops = {
     .check_events = vmx_check_nested_events,
+    .hv_timer_pending = nested_vmx_preemption_timer_pending,
     .get_state = vmx_get_nested_state,
     .set_state = vmx_set_nested_state,
     .get_vmcs12_pages = nested_get_vmcs12_pages,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 8c0b77ac8dc6..ee934a88a267 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8324,6 +8324,10 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
             kvm_x86_ops.enable_nmi_window(vcpu);
         if (kvm_cpu_has_injectable_intr(vcpu) || req_int_win)
             kvm_x86_ops.enable_irq_window(vcpu);
+        if (is_guest_mode(vcpu) &&
+            kvm_x86_ops.nested_ops->hv_timer_pending &&
+            kvm_x86_ops.nested_ops->hv_timer_pending(vcpu))
+            req_immediate_exit = true;
 
         WARN_ON(vcpu->arch.exception.pending);
     }
@@ -10179,6 +10183,11 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
     if (kvm_hv_has_stimer_pending(vcpu))
         return true;
 
+    if (is_guest_mode(vcpu) &&
+        kvm_x86_ops.nested_ops->hv_timer_pending &&
+        kvm_x86_ops.nested_ops->hv_timer_pending(vcpu))
+        return true;
+
     return false;
 }
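A sketch of the new hook and its caller (standalone C; the
function-pointer struct stands in for kvm_x86_nested_ops, and the stub
hook simply pretends the L2 preemption timer expired):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct nested_ops_model {
    bool (*hv_timer_pending)(void *vcpu);
};

static bool fake_preemption_timer_pending(void *vcpu)
{
    (void)vcpu;
    return true;    /* pretend the timer expired while L2 was running */
}

int main(void)
{
    struct nested_ops_model ops = {
        .hv_timer_pending = fake_preemption_timer_pending,
    };
    bool is_guest_mode = true, req_immediate_exit = false;

    /* Mirrors the check added to vcpu_enter_guest(): the hook is
     * optional, so the pointer is tested before the call. */
    if (is_guest_mode && ops.hv_timer_pending && ops.hv_timer_pending(NULL))
        req_immediate_exit = true;

    printf("req_immediate_exit=%d\n", req_immediate_exit);
    return 0;
}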
From patchwork Fri Apr 24 17:24:01 2020
X-Patchwork-Id: 11508689
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: wei.huang2@amd.com, cavery@redhat.com, vkuznets@redhat.com,
    Sean Christopherson, Oliver Upton, Jim Mattson
Subject: [PATCH v2 07/22] KVM: x86: Set KVM_REQ_EVENT if run is canceled with req_immediate_exit set
Date: Fri, 24 Apr 2020 13:24:01 -0400
Message-Id: <20200424172416.243870-8-pbonzini@redhat.com>
In-Reply-To: <20200424172416.243870-1-pbonzini@redhat.com>
References: <20200424172416.243870-1-pbonzini@redhat.com>

From: Sean Christopherson

Re-request KVM_REQ_EVENT if vcpu_enter_guest() bails after processing
pending requests and an immediate exit was requested. This fixes a bug
where a pending event, e.g. VMX preemption timer, is delayed and/or
lost if the exit was deferred due to something other than a higher
priority _injected_ event, e.g. due to a pending nested VM-Enter. This
bug only affects the !injected case as kvm_x86_ops.cancel_injection()
sets KVM_REQ_EVENT to redo the injection, but that's purely
serendipitous behavior with respect to the deferred event.

Note, emulated preemption timer isn't the only event that can be
affected, it simply happens to be the only event where not
re-requesting KVM_REQ_EVENT is blatantly visible to the guest.

Fixes: f4124500c2c13 ("KVM: nVMX: Fully emulate preemption timer")
Signed-off-by: Sean Christopherson
Message-Id: <20200423022550.15113-4-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini
---
 arch/x86/kvm/x86.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ee934a88a267..8ebfebc807fd 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8489,6 +8489,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
     return r;
 
 cancel_injection:
+    if (req_immediate_exit)
+        kvm_make_request(KVM_REQ_EVENT, vcpu);
     kvm_x86_ops.cancel_injection(vcpu);
     if (unlikely(vcpu->arch.apic_attention))
         kvm_lapic_sync_from_vapic(vcpu);
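The fix itself is tiny and can be modeled in isolation (standalone C;
req_event stands in for the KVM_REQ_EVENT request bit):

#include <stdbool.h>
#include <stdio.h>

struct vcpu_model { bool req_event; };

/* Models the cancel_injection path of vcpu_enter_guest(). */
static void cancel_run(struct vcpu_model *v, bool req_immediate_exit)
{
    if (req_immediate_exit)
        v->req_event = true;    /* kvm_make_request(KVM_REQ_EVENT, vcpu) */
    /* ...cancel_injection(), sync from vAPIC, bail out... */
}

int main(void)
{
    struct vcpu_model v = { false };

    /* Run canceled while a deferred (non-injected) event was pending,
     * e.g. an expired preemption timer: re-request event processing so
     * the event is not lost. */
    cancel_run(&v, true);
    printf("KVM_REQ_EVENT set: %d\n", v.req_event);
    return 0;
}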
From patchwork Fri Apr 24 17:24:02 2020
X-Patchwork-Id: 11508685
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: wei.huang2@amd.com, cavery@redhat.com, vkuznets@redhat.com,
    Sean Christopherson, Oliver Upton, Jim Mattson
Subject: [PATCH v2 08/22] KVM: x86: Make return for {interrupt_nmi,smi}_allowed() a bool instead of int
Date: Fri, 24 Apr 2020 13:24:02 -0400
Message-Id: <20200424172416.243870-9-pbonzini@redhat.com>
In-Reply-To: <20200424172416.243870-1-pbonzini@redhat.com>
References: <20200424172416.243870-1-pbonzini@redhat.com>

From: Sean Christopherson

Return an actual bool from kvm_x86_ops' {interrupt,nmi,smi}_allowed()
hooks to better reflect the return semantics, and to avoid creating an
even bigger mess when the related VMX code is refactored in upcoming
patches.

Signed-off-by: Sean Christopherson
Message-Id: <20200423022550.15113-5-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini
---
 arch/x86/include/asm/kvm_host.h |  6 +++---
 arch/x86/kvm/svm/svm.c          | 16 ++++++++--------
 arch/x86/kvm/vmx/vmx.c          | 14 +++++++-------
 3 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 372e6ea4af32..efaddc68a694 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1139,8 +1139,8 @@ struct kvm_x86_ops {
     void (*set_nmi)(struct kvm_vcpu *vcpu);
     void (*queue_exception)(struct kvm_vcpu *vcpu);
     void (*cancel_injection)(struct kvm_vcpu *vcpu);
-    int (*interrupt_allowed)(struct kvm_vcpu *vcpu);
-    int (*nmi_allowed)(struct kvm_vcpu *vcpu);
+    bool (*interrupt_allowed)(struct kvm_vcpu *vcpu);
+    bool (*nmi_allowed)(struct kvm_vcpu *vcpu);
     bool (*get_nmi_mask)(struct kvm_vcpu *vcpu);
     void (*set_nmi_mask)(struct kvm_vcpu *vcpu, bool masked);
     void (*enable_nmi_window)(struct kvm_vcpu *vcpu);
@@ -1238,7 +1238,7 @@ struct kvm_x86_ops {
 
     void (*setup_mce)(struct kvm_vcpu *vcpu);
 
-    int (*smi_allowed)(struct kvm_vcpu *vcpu);
+    bool (*smi_allowed)(struct kvm_vcpu *vcpu);
     int (*pre_enter_smm)(struct kvm_vcpu *vcpu, char *smstate);
     int (*pre_leave_smm)(struct kvm_vcpu *vcpu, const char *smstate);
     int (*enable_smi_window)(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 8e732eb0b5c9..cdee634e961d 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3062,11 +3062,11 @@ static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
         set_cr_intercept(svm, INTERCEPT_CR8_WRITE);
 }
 
-static int svm_nmi_allowed(struct kvm_vcpu *vcpu)
+static bool svm_nmi_allowed(struct kvm_vcpu *vcpu)
 {
     struct vcpu_svm *svm = to_svm(vcpu);
     struct vmcb *vmcb = svm->vmcb;
-    int ret;
+    bool ret;
 
     ret = !(vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK) &&
           !(svm->vcpu.arch.hflags & HF_NMI_MASK);
@@ -3095,14 +3095,14 @@ static void svm_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked)
     }
 }
 
-static int svm_interrupt_allowed(struct kvm_vcpu *vcpu)
+static bool svm_interrupt_allowed(struct kvm_vcpu *vcpu)
 {
     struct vcpu_svm *svm = to_svm(vcpu);
     struct vmcb *vmcb = svm->vmcb;
 
     if (!gif_set(svm) ||
         (vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK))
-        return 0;
+        return false;
 
     if (is_guest_mode(vcpu) && (svm->vcpu.arch.hflags & HF_VINTR_MASK))
         return !!(svm->vcpu.arch.hflags & HF_HIF_MASK);
@@ -3755,23 +3755,23 @@ static void svm_setup_mce(struct kvm_vcpu *vcpu)
     vcpu->arch.mcg_cap &= 0x1ff;
 }
 
-static int svm_smi_allowed(struct kvm_vcpu *vcpu)
+static bool svm_smi_allowed(struct kvm_vcpu *vcpu)
 {
     struct vcpu_svm *svm = to_svm(vcpu);
 
     /* Per APM Vol.2 15.22.2 "Response to SMI" */
     if (!gif_set(svm))
-        return 0;
+        return false;
 
     if (is_guest_mode(&svm->vcpu) &&
         svm->nested.intercept & (1ULL << INTERCEPT_SMI)) {
         /* TODO: Might need to set exit_info_1 and exit_info_2 here */
         svm->vmcb->control.exit_code = SVM_EXIT_SMI;
         svm->nested.exit_required = true;
-        return 0;
+        return false;
     }
 
-    return 1;
+    return true;
 }
 
 static int svm_pre_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 455cd2c8dbce..c98194f04b04 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4511,21 +4511,21 @@ void vmx_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked)
     }
 }
 
-static int vmx_nmi_allowed(struct kvm_vcpu *vcpu)
+static bool vmx_nmi_allowed(struct kvm_vcpu *vcpu)
 {
     if (to_vmx(vcpu)->nested.nested_run_pending)
-        return 0;
+        return false;
 
     if (!enable_vnmi &&
         to_vmx(vcpu)->loaded_vmcs->soft_vnmi_blocked)
-        return 0;
+        return false;
 
     return !(vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) &
          (GUEST_INTR_STATE_MOV_SS | GUEST_INTR_STATE_STI |
           GUEST_INTR_STATE_NMI));
 }
 
-static int vmx_interrupt_allowed(struct kvm_vcpu *vcpu)
+static bool vmx_interrupt_allowed(struct kvm_vcpu *vcpu)
 {
     if (to_vmx(vcpu)->nested.nested_run_pending)
         return false;
@@ -7675,12 +7675,12 @@ static void vmx_setup_mce(struct kvm_vcpu *vcpu)
             ~FEAT_CTL_LMCE_ENABLED;
 }
 
-static int vmx_smi_allowed(struct kvm_vcpu *vcpu)
+static bool vmx_smi_allowed(struct kvm_vcpu *vcpu)
 {
     /* we need a nested vmexit to enter SMM, postpone if run is pending */
     if (to_vmx(vcpu)->nested.nested_run_pending)
-        return 0;
-    return 1;
+        return false;
+    return true;
 }
 
 static int vmx_pre_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
From patchwork Fri Apr 24 17:24:03 2020
X-Patchwork-Id: 11508683
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: wei.huang2@amd.com, cavery@redhat.com, vkuznets@redhat.com,
    Sean Christopherson, Oliver Upton, Jim Mattson
Subject: [PATCH v2 09/22] KVM: x86: replace is_smm checks with kvm_x86_ops.smi_allowed
Date: Fri, 24 Apr 2020 13:24:03 -0400
Message-Id: <20200424172416.243870-10-pbonzini@redhat.com>
In-Reply-To: <20200424172416.243870-1-pbonzini@redhat.com>
References: <20200424172416.243870-1-pbonzini@redhat.com>

Do not hardcode is_smm so that all the architectural conditions for
blocking SMIs are listed in a single place. Well, in two places,
because this introduces some code duplication between Intel and AMD.

This ensures that nested SVM obeys GIF in kvm_vcpu_has_events.

Signed-off-by: Paolo Bonzini
---
 arch/x86/kvm/svm/svm.c | 2 +-
 arch/x86/kvm/vmx/vmx.c | 2 +-
 arch/x86/kvm/x86.c     | 6 +++---
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index cdee634e961d..01ee1c3be25b 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3771,7 +3771,7 @@ static bool svm_smi_allowed(struct kvm_vcpu *vcpu)
         return false;
     }
 
-    return true;
+    return !is_smm(vcpu);
 }
 
 static int svm_pre_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c98194f04b04..c33317bfc1cf 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7680,7 +7680,7 @@ static bool vmx_smi_allowed(struct kvm_vcpu *vcpu)
     /* we need a nested vmexit to enter SMM, postpone if run is pending */
     if (to_vmx(vcpu)->nested.nested_run_pending)
         return false;
-    return true;
+    return !is_smm(vcpu);
 }
 
 static int vmx_pre_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 8ebfebc807fd..e8469db6ccae 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7744,8 +7744,7 @@ static int inject_pending_event(struct kvm_vcpu *vcpu)
     if (kvm_event_needs_reinjection(vcpu))
         return 0;
 
-    if (vcpu->arch.smi_pending && !is_smm(vcpu) &&
-        kvm_x86_ops.smi_allowed(vcpu)) {
+    if (vcpu->arch.smi_pending && kvm_x86_ops.smi_allowed(vcpu)) {
         vcpu->arch.smi_pending = false;
         ++vcpu->arch.smi_count;
         enter_smm(vcpu);
@@ -10174,7 +10173,8 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
         return true;
 
     if (kvm_test_request(KVM_REQ_SMI, vcpu) ||
-        (vcpu->arch.smi_pending && !is_smm(vcpu)))
+        (vcpu->arch.smi_pending &&
+         kvm_x86_ops.smi_allowed(vcpu)))
         return true;
 
     if (kvm_arch_interrupt_allowed(vcpu) &&
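After this patch the SVM condition reduces to a small pure function
(standalone C model of svm_smi_allowed(); the two booleans stand in for
gif_set() and is_smm()):

#include <stdbool.h>
#include <stdio.h>

static bool svm_smi_allowed_model(bool gif_set, bool in_smm)
{
    /* Per APM Vol. 2, "Response to SMI": GIF must be set. */
    if (!gif_set)
        return false;
    return !in_smm;     /* previously open-coded by the callers */
}

int main(void)
{
    printf("%d\n", svm_smi_allowed_model(true, false));  /* 1 */
    printf("%d\n", svm_smi_allowed_model(false, false)); /* 0: GIF clear */
    printf("%d\n", svm_smi_allowed_model(true, true));   /* 0: in SMM */
    return 0;
}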
From patchwork Sat Apr 25 07:01:42 2020
X-Patchwork-Id: 11509649
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: wei.huang2@amd.com, cavery@redhat.com, vkuznets@redhat.com,
    Sean Christopherson, Oliver Upton, Jim Mattson
Subject: [PATCH v2 10/22] KVM: nVMX: Report NMIs as allowed when in L2 and Exit-on-NMI is set
Date: Sat, 25 Apr 2020 03:01:42 -0400
Message-Id: <20200425070154.251290-1-pbonzini@redhat.com>
In-Reply-To: <20200424172416.243870-1-pbonzini@redhat.com>
References: <20200424172416.243870-1-pbonzini@redhat.com>

From: Sean Christopherson

Report NMIs as allowed when the vCPU is in L2 and L2 is being run with
Exit-on-NMI enabled, as NMIs are always unblocked from L1's perspective
in this case.
Signed-off-by: Sean Christopherson
Message-Id: <20200423022550.15113-7-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini
---
 arch/x86/kvm/vmx/nested.c | 5 -----
 arch/x86/kvm/vmx/nested.h | 5 +++++
 arch/x86/kvm/vmx/vmx.c    | 3 +++
 3 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 182e5209a1f6..9edd9464fedf 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -698,11 +698,6 @@ static bool nested_exit_intr_ack_set(struct kvm_vcpu *vcpu)
         VM_EXIT_ACK_INTR_ON_EXIT;
 }
 
-static bool nested_exit_on_nmi(struct kvm_vcpu *vcpu)
-{
-    return nested_cpu_has_nmi_exiting(get_vmcs12(vcpu));
-}
-
 static int nested_vmx_check_apic_access_controls(struct kvm_vcpu *vcpu,
                          struct vmcs12 *vmcs12)
 {
diff --git a/arch/x86/kvm/vmx/nested.h b/arch/x86/kvm/vmx/nested.h
index 7ce9572c3d3a..5cc72ae0e277 100644
--- a/arch/x86/kvm/vmx/nested.h
+++ b/arch/x86/kvm/vmx/nested.h
@@ -225,6 +225,11 @@ static inline bool nested_cpu_has_save_preemption_timer(struct vmcs12 *vmcs12)
         VM_EXIT_SAVE_VMX_PREEMPTION_TIMER;
 }
 
+static inline bool nested_exit_on_nmi(struct kvm_vcpu *vcpu)
+{
+    return nested_cpu_has_nmi_exiting(get_vmcs12(vcpu));
+}
+
 /*
  * In nested virtualization, check if L1 asked to exit on external interrupts.
  * For most existing hypervisors, this will always return true.
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c33317bfc1cf..37b1986a4e8f 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4516,6 +4516,9 @@ static bool vmx_nmi_allowed(struct kvm_vcpu *vcpu)
     if (to_vmx(vcpu)->nested.nested_run_pending)
         return false;
 
+    if (is_guest_mode(vcpu) && nested_exit_on_nmi(vcpu))
+        return true;
+
     if (!enable_vnmi &&
         to_vmx(vcpu)->loaded_vmcs->soft_vnmi_blocked)
         return false;
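The new ordering of checks in vmx_nmi_allowed() can be modeled as
follows (standalone C; the booleans stand in for nested_run_pending,
is_guest_mode(), nested_exit_on_nmi() and the interruptibility state):

#include <stdbool.h>
#include <stdio.h>

static bool vmx_nmi_allowed_model(bool nested_run_pending, bool is_guest_mode,
                                  bool exit_on_nmi, bool nmi_blocked)
{
    if (nested_run_pending)
        return false;
    if (is_guest_mode && exit_on_nmi)
        return true;    /* vmexit, not injection: L1 never blocks it */
    return !nmi_blocked;
}

int main(void)
{
    /* L2 blocks NMIs, but L1 set Exit-on-NMI: still reported allowed. */
    printf("%d\n", vmx_nmi_allowed_model(false, true, true, true));
    return 0;
}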
From patchwork Sat Apr 25 07:01:43 2020
X-Patchwork-Id: 11509671
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: wei.huang2@amd.com, cavery@redhat.com, vkuznets@redhat.com,
    Sean Christopherson, Oliver Upton, Jim Mattson
Subject: [PATCH v2 11/22] KVM: nSVM: Report NMIs as allowed when in L2 and Exit-on-NMI is set
Date: Sat, 25 Apr 2020 03:01:43 -0400
Message-Id: <20200425070154.251290-2-pbonzini@redhat.com>
In-Reply-To: <20200424172416.243870-1-pbonzini@redhat.com>
References: <20200424172416.243870-1-pbonzini@redhat.com>

Report NMIs as allowed when the vCPU is in L2 and L2 is being run with
Exit-on-NMI enabled, as NMIs are always unblocked from L1's perspective
in this case.

Signed-off-by: Paolo Bonzini
---
 arch/x86/kvm/svm/nested.c | 5 -----
 arch/x86/kvm/svm/svm.c    | 3 +++
 arch/x86/kvm/svm/svm.h    | 5 +++++
 3 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index c3650efd2e89..748b01220aac 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -776,11 +776,6 @@ int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
     return vmexit;
 }
 
-static bool nested_exit_on_nmi(struct vcpu_svm *svm)
-{
-    return (svm->nested.intercept & (1ULL << INTERCEPT_NMI));
-}
-
 static void nested_svm_nmi(struct vcpu_svm *svm)
 {
     svm->vmcb->control.exit_code = SVM_EXIT_NMI;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 01ee1c3be25b..3f1f80737f9e 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3068,6 +3068,9 @@ static bool svm_nmi_allowed(struct kvm_vcpu *vcpu)
     struct vmcb *vmcb = svm->vmcb;
     bool ret;
 
+    if (is_guest_mode(vcpu) && nested_exit_on_nmi(svm))
+        return true;
+
     ret = !(vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK) &&
           !(svm->vcpu.arch.hflags & HF_NMI_MASK);
     ret = ret && gif_set(svm);
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index a2bc33aadb67..d8ae654340d4 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -378,6 +378,11 @@ static inline bool svm_nested_virtualize_tpr(struct kvm_vcpu *vcpu)
     return is_guest_mode(vcpu) && (vcpu->arch.hflags & HF_VINTR_MASK);
 }
 
+static inline bool nested_exit_on_nmi(struct vcpu_svm *svm)
+{
+    return (svm->nested.intercept & (1ULL << INTERCEPT_NMI));
+}
+
 void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
               struct vmcb *nested_vmcb, struct kvm_host_map *map);
 int nested_svm_vmrun(struct vcpu_svm *svm);
with ESMTP id CD3062098B for ; Sat, 25 Apr 2020 07:03:05 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="VneD+MYM" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726147AbgDYHCD (ORCPT ); Sat, 25 Apr 2020 03:02:03 -0400 Received: from us-smtp-2.mimecast.com ([205.139.110.61]:21948 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726113AbgDYHCC (ORCPT ); Sat, 25 Apr 2020 03:02:02 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1587798121; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:in-reply-to:in-reply-to:references:references; bh=tQ0DnedDERp8UnEgcFBdDKx3t9qOBvJdAkWOzvyaHK4=; b=VneD+MYM0wsf45lLP43OBSdGaOStcQEKsYzl9VYIXjGSzVJjsuC0uMWv6IOmB+dUebdJ7B fQjYekjQfk1oKx8jNrtIDEdrupkKMYBGHPiHV2uO380a0P1+ww8dTWSB+w7Hotqy8v4VpW wvOuhVgMXdBty6cN8voXtMXP9ZAXIWM= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-14-w35gB8JFPNqmo2pm04YuIw-1; Sat, 25 Apr 2020 03:01:59 -0400 X-MC-Unique: w35gB8JFPNqmo2pm04YuIw-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id B882845F; Sat, 25 Apr 2020 07:01:57 +0000 (UTC) Received: from virtlab511.virt.lab.eng.bos.redhat.com (virtlab511.virt.lab.eng.bos.redhat.com [10.19.152.198]) by smtp.corp.redhat.com (Postfix) with ESMTP id 0CF125D9C5; Sat, 25 Apr 2020 07:01:56 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: wei.huang2@amd.com, cavery@redhat.com, vkuznets@redhat.com, Sean Christopherson , Oliver Upton , Jim Mattson Subject: [PATCH v2 12/22] KVM: nSVM: Move SMI vmexit handling to svm_check_nested_events() Date: Sat, 25 Apr 2020 03:01:44 -0400 Message-Id: <20200425070154.251290-3-pbonzini@redhat.com> In-Reply-To: <20200424172416.243870-1-pbonzini@redhat.com> References: <20200424172416.243870-1-pbonzini@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Unlike VMX, SVM allows a hypervisor to take an SMI vmexit without any special SMM-monitor enablement sequence. Therefore, SMIs have to be handled like interrupts and NMIs. Check for an unblocked SMI in svm_check_nested_events() so that pending SMIs are correctly prioritized over IRQs and NMIs when the latter events will trigger VM-Exit. Note that there is no need to test explicitly for SMI vmexits, because guests always run outside SMM and therefore can never have SMIs blocked.
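To make the resulting priority order concrete, here is a minimal standalone C sketch of the dispatch (illustrative only: the struct, field, and function names below are invented for this model and are not KVM code):

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

struct vcpu_model {
	bool smi_pending, nmi_pending, irq_pending;
	bool block_nested_events;	/* reinjection or VMRUN still in flight */
	bool exit_on_smi, exit_on_nmi, exit_on_intr;
};

/* Model of the check order: SMI before NMI before IRQ. */
static int check_nested_events_model(struct vcpu_model *v)
{
	if (v->smi_pending && v->exit_on_smi) {
		if (v->block_nested_events)
			return -EBUSY;
		puts("synthesize SVM_EXIT_SMI for L1");
		return 0;
	}
	if (v->nmi_pending && v->exit_on_nmi) {
		if (v->block_nested_events)
			return -EBUSY;
		puts("synthesize SVM_EXIT_NMI for L1");
		return 0;
	}
	if (v->irq_pending && v->exit_on_intr) {
		if (v->block_nested_events)
			return -EBUSY;
		puts("synthesize SVM_EXIT_INTR for L1");
		return 0;
	}
	return 0;
}

int main(void)
{
	/* SMI and NMI are both pending; the SMI wins, as in the patch. */
	struct vcpu_model v = {
		.smi_pending = true, .nmi_pending = true,
		.exit_on_smi = true, .exit_on_nmi = true,
	};

	return check_nested_events_model(&v) ? 1 : 0;
}

As in the patch, a pending SMI shields a coincident NMI or IRQ, and anything arriving during the block_nested_events window is deferred with -EBUSY so it is retried on a later pass.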
Signed-off-by: Paolo Bonzini --- arch/x86/kvm/svm/nested.c | 16 ++++++++++++++++ arch/x86/kvm/svm/svm.c | 8 -------- arch/x86/kvm/svm/svm.h | 5 +++++ 3 files changed, 21 insertions(+), 8 deletions(-) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index 748b01220aac..226d5a0d677b 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -776,6 +776,15 @@ int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr, return vmexit; } +static void nested_svm_smi(struct vcpu_svm *svm) +{ + svm->vmcb->control.exit_code = SVM_EXIT_SMI; + svm->vmcb->control.exit_info_1 = 0; + svm->vmcb->control.exit_info_2 = 0; + + nested_svm_vmexit(svm); +} + static void nested_svm_nmi(struct vcpu_svm *svm) { svm->vmcb->control.exit_code = SVM_EXIT_NMI; @@ -807,6 +816,13 @@ static int svm_check_nested_events(struct kvm_vcpu *vcpu) kvm_event_needs_reinjection(vcpu) || svm->nested.exit_required || svm->nested.nested_run_pending; + if (vcpu->arch.smi_pending && nested_exit_on_smi(svm)) { + if (block_nested_events) + return -EBUSY; + nested_svm_smi(svm); + return 0; + } + if (vcpu->arch.nmi_pending && nested_exit_on_nmi(svm)) { if (block_nested_events) return -EBUSY; diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 3f1f80737f9e..1f9577b281e6 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -3766,14 +3766,6 @@ static bool svm_smi_allowed(struct kvm_vcpu *vcpu) if (!gif_set(svm)) return false; - if (is_guest_mode(&svm->vcpu) && - svm->nested.intercept & (1ULL << INTERCEPT_SMI)) { - /* TODO: Might need to set exit_info_1 and exit_info_2 here */ - svm->vmcb->control.exit_code = SVM_EXIT_SMI; - svm->nested.exit_required = true; - return false; - } - return !is_smm(vcpu); } diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index d8ae654340d4..4dc6d2b4b721 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -378,6 +378,11 @@ static inline bool svm_nested_virtualize_tpr(struct kvm_vcpu *vcpu) return is_guest_mode(vcpu) && (vcpu->arch.hflags & HF_VINTR_MASK); } +static inline bool nested_exit_on_smi(struct vcpu_svm *svm) +{ + return (svm->nested.intercept & (1ULL << INTERCEPT_SMI)); +} + static inline bool nested_exit_on_nmi(struct vcpu_svm *svm) { return (svm->nested.intercept & (1ULL << INTERCEPT_NMI)); From patchwork Sat Apr 25 07:01:45 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 11509665 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 19F5281 for ; Sat, 25 Apr 2020 07:02:46 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 02D032098B for ; Sat, 25 Apr 2020 07:02:46 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="F32QWdSa" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726295AbgDYHCG (ORCPT ); Sat, 25 Apr 2020 03:02:06 -0400 Received: from us-smtp-delivery-1.mimecast.com ([207.211.31.120]:51009 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726188AbgDYHCF (ORCPT ); Sat, 25 Apr 2020 03:02:05 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1587798124; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: 
to:to:cc:cc:in-reply-to:in-reply-to:references:references; bh=Zhop02cboELmmyjJa4pTw5otGh2YRyzNp42T5e+hcbY=; b=F32QWdSaIa4u31G8zsCSSHpy68Jn311I0mSQQFW7NGMzndASnRXk4f62syDZd+rqq832Yy X6JsBXyK6qQHPb3gR5V0Mkk/nmR3s2ur2oHxMoKDLHmGv3caWRK43FvshQV9LMtojlyVKA Q1oTI6i1eh7Ogc8Z7KX8Uizp5bw5bpk= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-249-L3qCV47DNH-5TfaqJEYYqA-1; Sat, 25 Apr 2020 03:01:59 -0400 X-MC-Unique: L3qCV47DNH-5TfaqJEYYqA-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 8CED11005510; Sat, 25 Apr 2020 07:01:58 +0000 (UTC) Received: from virtlab511.virt.lab.eng.bos.redhat.com (virtlab511.virt.lab.eng.bos.redhat.com [10.19.152.198]) by smtp.corp.redhat.com (Postfix) with ESMTP id D32835D9C5; Sat, 25 Apr 2020 07:01:57 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: wei.huang2@amd.com, cavery@redhat.com, vkuznets@redhat.com, Sean Christopherson , Oliver Upton , Jim Mattson Subject: [PATCH v2 13/22] KVM: VMX: Split out architectural interrupt/NMI blocking checks Date: Sat, 25 Apr 2020 03:01:45 -0400 Message-Id: <20200425070154.251290-4-pbonzini@redhat.com> In-Reply-To: <20200424172416.243870-1-pbonzini@redhat.com> References: <20200424172416.243870-1-pbonzini@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Sean Christopherson Move the architectural (non-KVM specific) interrupt/NMI blocking checks to a separate helper so that they can be used in a future patch by vmx_check_nested_events(). No functional change intended. 
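The shape of the refactoring is easy to see in isolation. A minimal sketch of the "architectural predicate plus policy wrapper" split, assuming nothing beyond what the diff that follows shows (all names here are invented for the model, not taken from KVM):

#include <assert.h>
#include <stdbool.h>

struct cpu_model {
	bool rflags_if;			/* EFLAGS.IF */
	bool int_shadow;		/* STI/MOV-SS interrupt shadow */
	bool nested_run_pending;	/* KVM bookkeeping, not architectural */
};

/* Architectural question only: safe to reuse from event-priority code. */
static bool interrupt_blocked(const struct cpu_model *c)
{
	return !c->rflags_if || c->int_shadow;
}

/* KVM policy wrapper: additionally defers while a VM-Enter is in flight. */
static bool interrupt_allowed(const struct cpu_model *c)
{
	if (c->nested_run_pending)
		return false;
	return !interrupt_blocked(c);
}

int main(void)
{
	struct cpu_model c = { .rflags_if = true, .nested_run_pending = true };

	/* Architecturally unblocked, yet the wrapper still says no. */
	assert(!interrupt_blocked(&c));
	assert(!interrupt_allowed(&c));
	return 0;
}

Keeping the architectural predicate free of nested_run_pending is what later lets vmx_check_nested_events() query blocking without tripping over KVM-internal state.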
Signed-off-by: Sean Christopherson Message-Id: <20200423022550.15113-8-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini --- arch/x86/kvm/vmx/vmx.c | 35 ++++++++++++++++++++++------------- arch/x86/kvm/vmx/vmx.h | 2 ++ 2 files changed, 24 insertions(+), 13 deletions(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 37b1986a4e8f..7fb7dcb3d94d 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -4511,21 +4511,35 @@ void vmx_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked) } } +bool vmx_nmi_blocked(struct kvm_vcpu *vcpu) +{ + if (is_guest_mode(vcpu) && nested_exit_on_nmi(vcpu)) + return false; + + if (!enable_vnmi && to_vmx(vcpu)->loaded_vmcs->soft_vnmi_blocked) + return true; + + return (vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) & + (GUEST_INTR_STATE_MOV_SS | GUEST_INTR_STATE_STI | + GUEST_INTR_STATE_NMI)); +} + static bool vmx_nmi_allowed(struct kvm_vcpu *vcpu) { if (to_vmx(vcpu)->nested.nested_run_pending) return false; - if (is_guest_mode(vcpu) && nested_exit_on_nmi(vcpu)) - return true; + return !vmx_nmi_blocked(vcpu); +} - if (!enable_vnmi && - to_vmx(vcpu)->loaded_vmcs->soft_vnmi_blocked) +bool vmx_interrupt_blocked(struct kvm_vcpu *vcpu) +{ + if (is_guest_mode(vcpu) && nested_exit_on_intr(vcpu)) return false; - return !(vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) & - (GUEST_INTR_STATE_MOV_SS | GUEST_INTR_STATE_STI - | GUEST_INTR_STATE_NMI)); + return !(vmcs_readl(GUEST_RFLAGS) & X86_EFLAGS_IF) || + (vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) & + (GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS)); } static bool vmx_interrupt_allowed(struct kvm_vcpu *vcpu) @@ -4533,12 +4547,7 @@ static bool vmx_interrupt_allowed(struct kvm_vcpu *vcpu) if (to_vmx(vcpu)->nested.nested_run_pending) return false; - if (is_guest_mode(vcpu) && nested_exit_on_intr(vcpu)) - return true; - - return (vmcs_readl(GUEST_RFLAGS) & X86_EFLAGS_IF) && - !(vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) & - (GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS)); + return !vmx_interrupt_blocked(vcpu); } static int vmx_set_tss_addr(struct kvm *kvm, unsigned int addr) diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h index edfb739e5907..b5e773267abe 100644 --- a/arch/x86/kvm/vmx/vmx.h +++ b/arch/x86/kvm/vmx/vmx.h @@ -344,6 +344,8 @@ void vmx_set_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg); u64 construct_eptp(struct kvm_vcpu *vcpu, unsigned long root_hpa); void update_exception_bitmap(struct kvm_vcpu *vcpu); void vmx_update_msr_bitmap(struct kvm_vcpu *vcpu); +bool vmx_nmi_blocked(struct kvm_vcpu *vcpu); +bool vmx_interrupt_blocked(struct kvm_vcpu *vcpu); bool vmx_get_nmi_mask(struct kvm_vcpu *vcpu); void vmx_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked); void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu); From patchwork Sat Apr 25 07:01:46 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 11509667 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9D58514DD for ; Sat, 25 Apr 2020 07:02:54 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8582F21655 for ; Sat, 25 Apr 2020 07:02:54 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="LM5JHo4n" Received: (majordomo@vger.kernel.org) by vger.kernel.org via 
listexpand id S1726453AbgDYHCq (ORCPT ); Sat, 25 Apr 2020 03:02:46 -0400 Received: from us-smtp-2.mimecast.com ([205.139.110.61]:29411 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726230AbgDYHCG (ORCPT ); Sat, 25 Apr 2020 03:02:06 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1587798124; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:in-reply-to:in-reply-to:references:references; bh=MsffsW/Ybv32snRIiX79A6qL6RuUG9dFSSyzWZBtQSo=; b=LM5JHo4nc01/NU4gJrNTDDVb5Vd+H9xYVYCkdih/y+TP2jHaQfbXJ5pwtbwvwSz8gP9awx EvagPPenq0obtaM7ifO4PKmIBwg8nrj7R+O+TnwF6eKXtvC4lpuo+aVbgwCCziAWYEFL+B ROgD2U1DyZqBTkqspYOwKRUfvi2aw8E= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-374-d_v5AaPbNJSYOT_En_BfeA-1; Sat, 25 Apr 2020 03:02:00 -0400 X-MC-Unique: d_v5AaPbNJSYOT_En_BfeA-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 613F8835B40; Sat, 25 Apr 2020 07:01:59 +0000 (UTC) Received: from virtlab511.virt.lab.eng.bos.redhat.com (virtlab511.virt.lab.eng.bos.redhat.com [10.19.152.198]) by smtp.corp.redhat.com (Postfix) with ESMTP id A7D8E5D9C5; Sat, 25 Apr 2020 07:01:58 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: wei.huang2@amd.com, cavery@redhat.com, vkuznets@redhat.com, Sean Christopherson , Oliver Upton , Jim Mattson Subject: [PATCH v2 14/22] KVM: SVM: Split out architectural interrupt/NMI/SMI blocking checks Date: Sat, 25 Apr 2020 03:01:46 -0400 Message-Id: <20200425070154.251290-5-pbonzini@redhat.com> In-Reply-To: <20200424172416.243870-1-pbonzini@redhat.com> References: <20200424172416.243870-1-pbonzini@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Move the architectural (non-KVM specific) interrupt/NMI/SMI blocking checks to a separate helper so that they can be used in a future patch by svm_check_nested_events(). No functional change intended. 
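The SVM flavor of the split has one wrinkle worth modeling: GIF gates every event class, so the helpers test it first. A standalone sketch under that assumption (model names only, not kernel code):

#include <assert.h>
#include <stdbool.h>

struct svm_model {
	bool gif;		/* global interrupt flag */
	bool in_smm;		/* already handling an SMI */
	bool nmi_masked;	/* an NMI is being serviced */
	bool int_shadow;	/* instruction-boundary interrupt shadow */
};

static bool smi_blocked(const struct svm_model *s)
{
	/* Per APM vol. 2, "Response to SMI": GIF=0 blocks SMIs too. */
	if (!s->gif)
		return true;
	return s->in_smm;
}

static bool nmi_blocked(const struct svm_model *s)
{
	if (!s->gif)
		return true;
	return s->int_shadow || s->nmi_masked;
}

int main(void)
{
	struct svm_model s = { .gif = false };

	/* With GIF clear, nothing gets through regardless of other state. */
	assert(smi_blocked(&s) && nmi_blocked(&s));
	return 0;
}

This mirrors the reordering in the diff that follows, where gif_set() is checked before the interrupt shadow, HF_NMI_MASK, or is_smm().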
Signed-off-by: Paolo Bonzini --- arch/x86/kvm/svm/svm.c | 51 +++++++++++++++++++++++++++++++++--------- arch/x86/kvm/svm/svm.h | 3 +++ 2 files changed, 43 insertions(+), 11 deletions(-) diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 1f9577b281e6..4e284b62d384 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -3062,22 +3062,33 @@ static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr) set_cr_intercept(svm, INTERCEPT_CR8_WRITE); } -static bool svm_nmi_allowed(struct kvm_vcpu *vcpu) +bool svm_nmi_blocked(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm = to_svm(vcpu); struct vmcb *vmcb = svm->vmcb; bool ret; - if (is_guest_mode(vcpu) && nested_exit_on_nmi(svm)) + if (!gif_set(svm)) return true; - ret = !(vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK) && - !(svm->vcpu.arch.hflags & HF_NMI_MASK); - ret = ret && gif_set(svm); + if (is_guest_mode(vcpu) && nested_exit_on_nmi(svm)) + return false; + + ret = (vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK) || + (svm->vcpu.arch.hflags & HF_NMI_MASK); return ret; } +static bool svm_nmi_allowed(struct kvm_vcpu *vcpu) +{ + struct vcpu_svm *svm = to_svm(vcpu); + if (svm->nested.nested_run_pending) + return false; + + return !svm_nmi_blocked(vcpu); +} + static bool svm_get_nmi_mask(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm = to_svm(vcpu); @@ -3098,19 +3109,28 @@ static void svm_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked) } } -static bool svm_interrupt_allowed(struct kvm_vcpu *vcpu) +bool svm_interrupt_blocked(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm = to_svm(vcpu); struct vmcb *vmcb = svm->vmcb; if (!gif_set(svm) || (vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK)) - return false; + return true; if (is_guest_mode(vcpu) && (svm->vcpu.arch.hflags & HF_VINTR_MASK)) - return !!(svm->vcpu.arch.hflags & HF_HIF_MASK); + return !(svm->vcpu.arch.hflags & HF_HIF_MASK); else - return !!(kvm_get_rflags(vcpu) & X86_EFLAGS_IF); + return !(kvm_get_rflags(vcpu) & X86_EFLAGS_IF); +} + +static bool svm_interrupt_allowed(struct kvm_vcpu *vcpu) +{ + struct vcpu_svm *svm = to_svm(vcpu); + if (svm->nested.nested_run_pending) + return false; + + return !svm_interrupt_blocked(vcpu); } static void enable_irq_window(struct kvm_vcpu *vcpu) @@ -3758,15 +3778,24 @@ static void svm_setup_mce(struct kvm_vcpu *vcpu) vcpu->arch.mcg_cap &= 0x1ff; } -static bool svm_smi_allowed(struct kvm_vcpu *vcpu) +bool svm_smi_blocked(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm = to_svm(vcpu); /* Per APM Vol.2 15.22.2 "Response to SMI" */ if (!gif_set(svm)) + return true; + + return is_smm(vcpu); +} + +static bool svm_smi_allowed(struct kvm_vcpu *vcpu) +{ + struct vcpu_svm *svm = to_svm(vcpu); + if (svm->nested.nested_run_pending) return false; - return !is_smm(vcpu); + return !svm_smi_blocked(vcpu); } static int svm_pre_enter_smm(struct kvm_vcpu *vcpu, char *smstate) diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index 4dc6d2b4b721..f6d604b72a4c 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -366,6 +366,9 @@ void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0); int svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4); void svm_flush_tlb(struct kvm_vcpu *vcpu); void disable_nmi_singlestep(struct vcpu_svm *svm); +bool svm_smi_blocked(struct kvm_vcpu *vcpu); +bool svm_nmi_blocked(struct kvm_vcpu *vcpu); +bool svm_interrupt_blocked(struct kvm_vcpu *vcpu); /* nested.c */ From patchwork Sat Apr 25 07:01:47 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 11509669 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 14CE614DD for ; Sat, 25 Apr 2020 07:02:59 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id F229621835 for ; Sat, 25 Apr 2020 07:02:58 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="CdiNEhCG" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726476AbgDYHC6 (ORCPT ); Sat, 25 Apr 2020 03:02:58 -0400 Received: from us-smtp-2.mimecast.com ([205.139.110.61]:38238 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1725837AbgDYHCF (ORCPT ); Sat, 25 Apr 2020 03:02:05 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1587798124; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:in-reply-to:in-reply-to:references:references; bh=AgRaTvlZ0YOO3h0VcTooJDKwBeBo3GSedwNk5gifeLI=; b=CdiNEhCGgjck056f7YiURFJZ0ft3aPnZ7TVN5k2jmhaFGPoa8IVY6dloJcf+bT/isRrgdK wn6T/wZmt/lwLNY/V7xap1rYPagaOElsMOKwvHqlNa4XwqA6kofq+CDtN9inb7witUvFb5 AF3+i1qzY4wgaZHoeswEo6QiFKz26O8= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-144-SxHZjiXmPK-3axcf6CAamQ-1; Sat, 25 Apr 2020 03:02:01 -0400 X-MC-Unique: SxHZjiXmPK-3axcf6CAamQ-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 35124460; Sat, 25 Apr 2020 07:02:00 +0000 (UTC) Received: from virtlab511.virt.lab.eng.bos.redhat.com (virtlab511.virt.lab.eng.bos.redhat.com [10.19.152.198]) by smtp.corp.redhat.com (Postfix) with ESMTP id 7BB9F5D9C5; Sat, 25 Apr 2020 07:01:59 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: wei.huang2@amd.com, cavery@redhat.com, vkuznets@redhat.com, Sean Christopherson , Oliver Upton , Jim Mattson Subject: [PATCH v2 15/22] KVM: nVMX: Preserve IRQ/NMI priority irrespective of exiting behavior Date: Sat, 25 Apr 2020 03:01:47 -0400 Message-Id: <20200425070154.251290-6-pbonzini@redhat.com> In-Reply-To: <20200424172416.243870-1-pbonzini@redhat.com> References: <20200424172416.243870-1-pbonzini@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Sean Christopherson Short circuit vmx_check_nested_events() if an unblocked IRQ/NMI is pending and needs to be injected into L2; priority between coincident events is not dependent on exiting behavior.
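The control-flow change is easier to see in a stripped-down model. A sketch (invented names; compiles standalone) of the "unblocked first, exiting second" structure this patch introduces:

#include <stdbool.h>
#include <stdio.h>

struct event_model {
	bool nmi_pending, nmi_blocked, exit_on_nmi;
	bool irq_pending, irq_blocked, exit_on_intr;
};

static void check_events(const struct event_model *e)
{
	if (e->nmi_pending && !e->nmi_blocked) {
		if (e->exit_on_nmi)
			puts("vmexit to L1 for the NMI");
		else
			puts("no vmexit: the NMI is injected into L2");
		return;	/* the IRQ is NOT considered on this pass */
	}
	if (e->irq_pending && !e->irq_blocked) {
		if (e->exit_on_intr)
			puts("vmexit to L1 for the IRQ");
		else
			puts("no vmexit: the IRQ is injected into L2");
	}
}

int main(void)
{
	/* Coincident NMI and IRQ; L1 does not intercept NMIs. The NMI
	 * still takes priority instead of falling through to the IRQ. */
	struct event_model e = {
		.nmi_pending = true, .irq_pending = true,
		.exit_on_intr = true,
	};

	check_events(&e);
	return 0;
}

The early return after handling the NMI is the point of the patch: even when L1 does not intercept NMIs, the pending NMI still prevents the lower-priority IRQ from being processed in the same pass.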
Fixes: b6b8a1451fc4 ("KVM: nVMX: Rework interception of IRQs and NMIs") Signed-off-by: Sean Christopherson Message-Id: <20200423022550.15113-9-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini --- arch/x86/kvm/vmx/nested.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index 9edd9464fedf..188ff4cfdbaf 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -3750,9 +3750,12 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu) return 0; } - if (vcpu->arch.nmi_pending && nested_exit_on_nmi(vcpu)) { + if (vcpu->arch.nmi_pending && !vmx_nmi_blocked(vcpu)) { if (block_nested_events) return -EBUSY; + if (!nested_exit_on_nmi(vcpu)) + goto no_vmexit; + nested_vmx_vmexit(vcpu, EXIT_REASON_EXCEPTION_NMI, NMI_VECTOR | INTR_TYPE_NMI_INTR | INTR_INFO_VALID_MASK, 0); @@ -3765,9 +3768,11 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu) return 0; } - if (kvm_cpu_has_interrupt(vcpu) && nested_exit_on_intr(vcpu)) { + if (kvm_cpu_has_interrupt(vcpu) && !vmx_interrupt_blocked(vcpu)) { if (block_nested_events) return -EBUSY; + if (!nested_exit_on_intr(vcpu)) + goto no_vmexit; nested_vmx_vmexit(vcpu, EXIT_REASON_EXTERNAL_INTERRUPT, 0, 0); return 0; } From patchwork Sat Apr 25 07:01:48 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 11509655 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9680481 for ; Sat, 25 Apr 2020 07:02:21 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7E47320767 for ; Sat, 25 Apr 2020 07:02:21 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="MsQ7Hmq9" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726385AbgDYHCU (ORCPT ); Sat, 25 Apr 2020 03:02:20 -0400 Received: from us-smtp-delivery-1.mimecast.com ([205.139.110.120]:58543 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726303AbgDYHCH (ORCPT ); Sat, 25 Apr 2020 03:02:07 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1587798126; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:in-reply-to:in-reply-to:references:references; bh=eX6H79K52YklTNgCmP3prwtALtJImflR75Cxxh6oi+8=; b=MsQ7Hmq9vF/0YwCmj5tR1IP0LD/efMvjay177xIpSuRvT8bbrU/tGRarx7m+hk2yPXQ+Bv 55bEkhh7CHcwW82dvSYPLzs4B0T3tEHZvqF0WjKM9Tthb0klqXWvDRBeQLLIcG+k7N9JCw 3LpzVQYLvVGqWaTLpJj3KXwyLWYaJy4= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-248-to2DLAhsMW6MYLaVz8JyoA-1; Sat, 25 Apr 2020 03:02:02 -0400 X-MC-Unique: to2DLAhsMW6MYLaVz8JyoA-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 0B8381054F93; Sat, 25 Apr 2020 07:02:01 +0000 (UTC) Received: from virtlab511.virt.lab.eng.bos.redhat.com (virtlab511.virt.lab.eng.bos.redhat.com [10.19.152.198]) by smtp.corp.redhat.com (Postfix) with ESMTP id 50B4A5D9C5; Sat, 25 Apr 2020 07:02:00 +0000 (UTC) From: Paolo Bonzini To: 
linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: wei.huang2@amd.com, cavery@redhat.com, vkuznets@redhat.com, Sean Christopherson , Oliver Upton , Jim Mattson Subject: [PATCH v2 16/22] KVM: nVMX: Prioritize SMI over nested IRQ/NMI Date: Sat, 25 Apr 2020 03:01:48 -0400 Message-Id: <20200425070154.251290-7-pbonzini@redhat.com> In-Reply-To: <20200424172416.243870-1-pbonzini@redhat.com> References: <20200424172416.243870-1-pbonzini@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Sean Christopherson Check for an unblocked SMI in vmx_check_nested_events() so that pending SMIs are correctly prioritized over IRQs and NMIs when the latter events will trigger VM-Exit. This also fixes an issue where an SMI that was marked pending while processing a nested VM-Enter wouldn't trigger an immediate exit, i.e. would be incorrectly delayed until L2 happened to take a VM-Exit. Fixes: 64d6067057d96 ("KVM: x86: stubs for SMM support") Signed-off-by: Sean Christopherson Message-Id: <20200423022550.15113-10-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini --- arch/x86/kvm/vmx/nested.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index 188ff4cfdbaf..2c36f3f53108 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -3750,6 +3750,12 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu) return 0; } + if (vcpu->arch.smi_pending && !is_smm(vcpu)) { + if (block_nested_events) + return -EBUSY; + goto no_vmexit; + } + if (vcpu->arch.nmi_pending && !vmx_nmi_blocked(vcpu)) { if (block_nested_events) return -EBUSY; From patchwork Sat Apr 25 07:01:49 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 11509653 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0599114DD for ; Sat, 25 Apr 2020 07:02:19 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E36292098B for ; Sat, 25 Apr 2020 07:02:18 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="VHpGcwkV" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726337AbgDYHCI (ORCPT ); Sat, 25 Apr 2020 03:02:08 -0400 Received: from us-smtp-delivery-1.mimecast.com ([205.139.110.120]:51645 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726307AbgDYHCI (ORCPT ); Sat, 25 Apr 2020 03:02:08 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1587798127; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:in-reply-to:in-reply-to:references:references; bh=ASlviBxFLp+Z3Ulj6MCrDwXnWDa6GkiqlDL/SqVZeZg=; b=VHpGcwkVD7qu0fqoSIb5ZcEuRgZfeazIWuYLkAw4PWS+iYtnpY3tawvK6r02kyGhoKQjqw HBCOamNOfsq7mn+KEPVWL6m41psvNljI6gDULgDgFNP7a0vt/tdI3Rhc3+7w/vHkWQMTZe bP8ay/HK7Lk2DLZmYkkZXqBXSh6TqcI= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-286-K5LGcn-VOBe4kFRGf0a0lQ-1; Sat, 25 Apr 2020 03:02:03 -0400 X-MC-Unique: K5LGcn-VOBe4kFRGf0a0lQ-1 Received: from smtp.corp.redhat.com 
(int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id D7515107ACF5; Sat, 25 Apr 2020 07:02:01 +0000 (UTC) Received: from virtlab511.virt.lab.eng.bos.redhat.com (virtlab511.virt.lab.eng.bos.redhat.com [10.19.152.198]) by smtp.corp.redhat.com (Postfix) with ESMTP id 27F955D9C5; Sat, 25 Apr 2020 07:02:01 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: wei.huang2@amd.com, cavery@redhat.com, vkuznets@redhat.com, Sean Christopherson , Oliver Upton , Jim Mattson Subject: [PATCH v2 17/22] KVM: nSVM: Report interrupts as allowed when in L2 and exit-on-interrupt is set Date: Sat, 25 Apr 2020 03:01:49 -0400 Message-Id: <20200425070154.251290-8-pbonzini@redhat.com> In-Reply-To: <20200424172416.243870-1-pbonzini@redhat.com> References: <20200424172416.243870-1-pbonzini@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Report interrupts as allowed when the vCPU is in L2 and L2 is being run with exit-on-interrupts enabled and EFLAGS.IF=1 (either on the host or on the guest according to VINTR). Interrupts are always unblocked from L1's perspective in this case. While moving nested_exit_on_intr to svm.h, use INTERCEPT_INTR properly instead of assuming it's zero (which it is of course). Signed-off-by: Paolo Bonzini --- arch/x86/kvm/svm/nested.c | 5 ----- arch/x86/kvm/svm/svm.c | 23 +++++++++++++++++------ arch/x86/kvm/svm/svm.h | 5 +++++ 3 files changed, 22 insertions(+), 11 deletions(-) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index 226d5a0d677b..a7a3d695405e 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -805,11 +805,6 @@ static void nested_svm_intr(struct vcpu_svm *svm) nested_svm_vmexit(svm); } -static bool nested_exit_on_intr(struct vcpu_svm *svm) -{ - return (svm->nested.intercept & 1ULL); -} - static int svm_check_nested_events(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm = to_svm(vcpu); diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 4e284b62d384..0bbb2cd24eb7 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -3114,14 +3114,25 @@ bool svm_interrupt_blocked(struct kvm_vcpu *vcpu) struct vcpu_svm *svm = to_svm(vcpu); struct vmcb *vmcb = svm->vmcb; - if (!gif_set(svm) || - (vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK)) + if (!gif_set(svm)) return true; - if (is_guest_mode(vcpu) && (svm->vcpu.arch.hflags & HF_VINTR_MASK)) - return !(svm->vcpu.arch.hflags & HF_HIF_MASK); - else - return !(kvm_get_rflags(vcpu) & X86_EFLAGS_IF); + if (is_guest_mode(vcpu)) { + /* As long as interrupts are being delivered... */ + if ((svm->vcpu.arch.hflags & HF_VINTR_MASK) + ? !(svm->vcpu.arch.hflags & HF_HIF_MASK) + : !(kvm_get_rflags(vcpu) & X86_EFLAGS_IF)) + return true; + + /* ... 
vmexits aren't blocked by the interrupt shadow */ + if (nested_exit_on_intr(svm)) + return false; + } else { + if (!(kvm_get_rflags(vcpu) & X86_EFLAGS_IF)) + return true; + } + + return (vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK); } static bool svm_interrupt_allowed(struct kvm_vcpu *vcpu) diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index f6d604b72a4c..5cc559ab862d 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -386,6 +386,11 @@ static inline bool nested_exit_on_smi(struct vcpu_svm *svm) return (svm->nested.intercept & (1ULL << INTERCEPT_SMI)); } +static inline bool nested_exit_on_intr(struct vcpu_svm *svm) +{ + return (svm->nested.intercept & (1ULL << INTERCEPT_INTR)); +} + static inline bool nested_exit_on_nmi(struct vcpu_svm *svm) { return (svm->nested.intercept & (1ULL << INTERCEPT_NMI)); From patchwork Sat Apr 25 07:01:50 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 11509661 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id BD8F881 for ; Sat, 25 Apr 2020 07:02:35 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 9B2E42098B for ; Sat, 25 Apr 2020 07:02:35 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="F2c5lcyf" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726409AbgDYHCY (ORCPT ); Sat, 25 Apr 2020 03:02:24 -0400 Received: from us-smtp-1.mimecast.com ([207.211.31.81]:41866 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726271AbgDYHCH (ORCPT ); Sat, 25 Apr 2020 03:02:07 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1587798126; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:in-reply-to:in-reply-to:references:references; bh=qRicctAkr3eINW616/AMpnm0J0Vm/B504ODY+T3FuoI=; b=F2c5lcyfu/r9lsyvS4GGGIhD6m1M3sYBjGxkZmDpA1flAwDDoWz1Hx7ithobAdbqJAy3ds lbQoFHOnMVgiJijtZ2Xr9k7b+gon9Xamtf/IEM8vlPaUZETAzF65npHaIHd2ndAXYhpORI XipuOhm3/6nEXfSo33CEEg4ARGYU8dw= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-417-V6c31jCOM6e6KeYImgCBEg-1; Sat, 25 Apr 2020 03:02:04 -0400 X-MC-Unique: V6c31jCOM6e6KeYImgCBEg-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id B0423462; Sat, 25 Apr 2020 07:02:02 +0000 (UTC) Received: from virtlab511.virt.lab.eng.bos.redhat.com (virtlab511.virt.lab.eng.bos.redhat.com [10.19.152.198]) by smtp.corp.redhat.com (Postfix) with ESMTP id F0EB05D9C5; Sat, 25 Apr 2020 07:02:01 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: wei.huang2@amd.com, cavery@redhat.com, vkuznets@redhat.com, Sean Christopherson , Oliver Upton , Jim Mattson Subject: [PATCH v2 18/22] KVM: nSVM: Preserve IRQ/NMI/SMI priority irrespective of exiting behavior Date: Sat, 25 Apr 2020 03:01:50 -0400 Message-Id: <20200425070154.251290-9-pbonzini@redhat.com> In-Reply-To: <20200424172416.243870-1-pbonzini@redhat.com> References: 
<20200424172416.243870-1-pbonzini@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Short circuit svm_check_nested_events() if an unblocked IRQ/NMI/SMI is pending and needs to be injected into L2; priority between coincident events is not dependent on exiting behavior. Fixes: b518ba9fa691 ("KVM: nSVM: implement check_nested_events for interrupts") Signed-off-by: Paolo Bonzini --- arch/x86/kvm/svm/nested.c | 12 +++++++++--- 1 file changed, 9 insertions(+), 3 deletions(-) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index a7a3d695405e..7c86ccb0e939 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -811,23 +811,29 @@ static int svm_check_nested_events(struct kvm_vcpu *vcpu) kvm_event_needs_reinjection(vcpu) || svm->nested.exit_required || svm->nested.nested_run_pending; - if (vcpu->arch.smi_pending && nested_exit_on_smi(svm)) { + if (vcpu->arch.smi_pending && !svm_smi_blocked(vcpu)) { if (block_nested_events) return -EBUSY; + if (!nested_exit_on_smi(svm)) + return 0; nested_svm_smi(svm); return 0; } - if (vcpu->arch.nmi_pending && nested_exit_on_nmi(svm)) { + if (vcpu->arch.nmi_pending && !svm_nmi_blocked(vcpu)) { if (block_nested_events) return -EBUSY; + if (!nested_exit_on_nmi(svm)) + return 0; nested_svm_nmi(svm); return 0; } - if (kvm_cpu_has_interrupt(vcpu) && nested_exit_on_intr(svm)) { + if (kvm_cpu_has_interrupt(vcpu) && !svm_interrupt_blocked(vcpu)) { if (block_nested_events) return -EBUSY; + if (!nested_exit_on_intr(svm)) + return 0; nested_svm_intr(svm); return 0; } From patchwork Sat Apr 25 07:01:51 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 11509651 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6EBB714DD for ; Sat, 25 Apr 2020 07:02:17 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5656B2098B for ; Sat, 25 Apr 2020 07:02:17 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="NRVTAHVU" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726378AbgDYHCN (ORCPT ); Sat, 25 Apr 2020 03:02:13 -0400 Received: from us-smtp-delivery-1.mimecast.com ([205.139.110.120]:25031 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726343AbgDYHCK (ORCPT ); Sat, 25 Apr 2020 03:02:10 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1587798129; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:in-reply-to:in-reply-to:references:references; bh=hOqzqNt3g35oGlWmdR+OwR0lzPFCoYe2O5p/oF2M6rc=; b=NRVTAHVUtqgv0M8TvUSknRyq0T2yuOLsmygeB5+HskSSW2NIa20PMBBCEKS0F+YWARvRho 7nUtZxNQbdc3wrOLJDYlYddRnlZ2WYOekMIZaT4rztcH9K0SvMUZ0mf/qLM6siVrs8Pzdh XVqGjfDWLh0iBTupxGJAiIPsA9lsxFc= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-496-x65JYobjPbyWxNDlk1wnuA-1; Sat, 25 Apr 2020 03:02:04 -0400 X-MC-Unique: x65JYobjPbyWxNDlk1wnuA-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
(No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 9381C800580; Sat, 25 Apr 2020 07:02:03 +0000 (UTC) Received: from virtlab511.virt.lab.eng.bos.redhat.com (virtlab511.virt.lab.eng.bos.redhat.com [10.19.152.198]) by smtp.corp.redhat.com (Postfix) with ESMTP id C7F285D9C5; Sat, 25 Apr 2020 07:02:02 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: wei.huang2@amd.com, cavery@redhat.com, vkuznets@redhat.com, Sean Christopherson , Oliver Upton , Jim Mattson Subject: [PATCH v2 19/22] KVM: x86: WARN on injected+pending exception even in nested case Date: Sat, 25 Apr 2020 03:01:51 -0400 Message-Id: <20200425070154.251290-10-pbonzini@redhat.com> In-Reply-To: <20200424172416.243870-1-pbonzini@redhat.com> References: <20200424172416.243870-1-pbonzini@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Sean Christopherson WARN if a pending exception is coincident with an injected exception before calling check_nested_events() so that the WARN will fire even if inject_pending_event() bails early because check_nested_events() detects the conflict. Bailing early isn't problematic (quite the opposite), but suppressing the WARN is undesirable as it could mask a bug elsewhere in KVM. Signed-off-by: Sean Christopherson Message-Id: <20200423022550.15113-11-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini --- arch/x86/kvm/x86.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index e8469db6ccae..44b9d834621c 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -7693,6 +7693,9 @@ static int inject_pending_event(struct kvm_vcpu *vcpu) kvm_x86_ops.set_irq(vcpu); } + WARN_ON_ONCE(vcpu->arch.exception.injected && + vcpu->arch.exception.pending); + /* * Call check_nested_events() even if we reinjected a previous event * in order for caller to determine if it should require immediate-exit @@ -7711,7 +7714,6 @@ static int inject_pending_event(struct kvm_vcpu *vcpu) vcpu->arch.exception.has_error_code, vcpu->arch.exception.error_code); - WARN_ON_ONCE(vcpu->arch.exception.injected); vcpu->arch.exception.pending = false; vcpu->arch.exception.injected = true; From patchwork Sat Apr 25 07:02:13 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 11509657 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 05CDA14DD for ; Sat, 25 Apr 2020 07:02:33 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E29FB21927 for ; Sat, 25 Apr 2020 07:02:32 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="bGsac07W" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726437AbgDYHCa (ORCPT ); Sat, 25 Apr 2020 03:02:30 -0400 Received: from us-smtp-delivery-1.mimecast.com ([205.139.110.120]:40211 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726400AbgDYHCY (ORCPT ); Sat, 25 Apr 2020 03:02:24 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1587798143; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: 
to:to:cc:cc:in-reply-to:in-reply-to:references:references; bh=F5d2hZyoWLkLFhOgbvoh0l9Xmnu5vZ/wvjbqCedo71Y=; b=bGsac07WjK6csCi/CpgUbF+jODT5gSccV5NLXO7PnmpDQjvPbHgexTiV9CEt6TfIZQAkQh m246cAqNwPOpBpK9DUqolYsDR5jNJyOD8IXAdtctjPMNSFVLZJE797LNFMcen9pMuSuPa/ ZKSs/mMwL9hVXI9Oo8+sjaf7MPIqXuA= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-342-hhz45wqyOoOewXgRnfFx9g-1; Sat, 25 Apr 2020 03:02:17 -0400 X-MC-Unique: hhz45wqyOoOewXgRnfFx9g-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id B1E3C462; Sat, 25 Apr 2020 07:02:16 +0000 (UTC) Received: from virtlab511.virt.lab.eng.bos.redhat.com (virtlab511.virt.lab.eng.bos.redhat.com [10.19.152.198]) by smtp.corp.redhat.com (Postfix) with ESMTP id F3918600E8; Sat, 25 Apr 2020 07:02:15 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: wei.huang2@amd.com, cavery@redhat.com, vkuznets@redhat.com, Sean Christopherson , Oliver Upton , Jim Mattson Subject: [PATCH v2 20/22] KVM: VMX: Use vmx_interrupt_blocked() directly from vmx_handle_exit() Date: Sat, 25 Apr 2020 03:02:13 -0400 Message-Id: <20200425070215.251397-1-pbonzini@redhat.com> In-Reply-To: <20200424172416.243870-1-pbonzini@redhat.com> References: <20200424172416.243870-1-pbonzini@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Sean Christopherson Use vmx_interrupt_blocked() instead of bouncing through vmx_interrupt_allowed() when handling edge cases in vmx_handle_exit(). The nested_run_pending check in vmx_interrupt_allowed() should never evaluate true in the VM-Exit path. Hoist the WARN in handle_invalid_guest_state() up to vmx_handle_exit() to enforce the above assumption for the !enable_vnmi case, and to detect any other potential bugs with nested VM-Enter. Signed-off-by: Sean Christopherson Message-Id: <20200423022550.15113-12-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini --- arch/x86/kvm/vmx/vmx.c | 19 ++++++++++--------- 1 file changed, 10 insertions(+), 9 deletions(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 7fb7dcb3d94d..0e5eacd6f911 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -5268,18 +5268,11 @@ static int handle_invalid_guest_state(struct kvm_vcpu *vcpu) bool intr_window_requested; unsigned count = 130; - /* - * We should never reach the point where we are emulating L2 - * due to invalid guest state as that means we incorrectly - * allowed a nested VMEntry with an invalid vmcs12. 
- */ - WARN_ON_ONCE(vmx->emulation_required && vmx->nested.nested_run_pending); - intr_window_requested = exec_controls_get(vmx) & CPU_BASED_INTR_WINDOW_EXITING; while (vmx->emulation_required && count-- != 0) { - if (intr_window_requested && vmx_interrupt_allowed(vcpu)) + if (intr_window_requested && !vmx_interrupt_blocked(vcpu)) return handle_interrupt_window(&vmx->vcpu); if (kvm_test_request(KVM_REQ_EVENT, vcpu)) @@ -5896,6 +5889,14 @@ static int vmx_handle_exit(struct kvm_vcpu *vcpu, if (enable_pml) vmx_flush_pml_buffer(vcpu); + /* + * We should never reach this point with a pending nested VM-Enter, and + * more specifically emulation of L2 due to invalid guest state (see + * below) should never happen as that means we incorrectly allowed a + * nested VM-Enter with an invalid vmcs12. + */ + WARN_ON_ONCE(vmx->nested.nested_run_pending); + /* If guest state is invalid, start emulating */ if (vmx->emulation_required) return handle_invalid_guest_state(vcpu); @@ -5962,7 +5963,7 @@ static int vmx_handle_exit(struct kvm_vcpu *vcpu, if (unlikely(!enable_vnmi && vmx->loaded_vmcs->soft_vnmi_blocked)) { - if (vmx_interrupt_allowed(vcpu)) { + if (!vmx_interrupt_blocked(vcpu)) { vmx->loaded_vmcs->soft_vnmi_blocked = 0; } else if (vmx->loaded_vmcs->vnmi_blocked_time > 1000000000LL && vcpu->arch.nmi_pending) { From patchwork Sat Apr 25 07:02:14 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 11509663 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 033D481 for ; Sat, 25 Apr 2020 07:02:45 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id DE86020857 for ; Sat, 25 Apr 2020 07:02:44 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="UwkN3maF" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726404AbgDYHCo (ORCPT ); Sat, 25 Apr 2020 03:02:44 -0400 Received: from us-smtp-delivery-1.mimecast.com ([205.139.110.120]:41024 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726396AbgDYHCX (ORCPT ); Sat, 25 Apr 2020 03:02:23 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1587798143; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:in-reply-to:in-reply-to:references:references; bh=HZm6DwXTwnx3KiIUACMsSmVOaGpBtskQUoUJSCSU4Rk=; b=UwkN3maF7BJQptgnM1/4776MctmZ9m2sCLWi2DzvI8mFD/5Zc1pFC7g1wKI379J1xmtasX QtQ3O8mwz+8xQpj4f4WTtd5ri1GOOMK8Kh4q9awk7rpRhmxerWwVkLzDCBG8ABKlCjE4HN tVoa2XDDR1Z0QnN2EUZ6b4d2NpCIhnw= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-380-4ZB6agrCM12kT8-FC1h6-w-1; Sat, 25 Apr 2020 03:02:18 -0400 X-MC-Unique: 4ZB6agrCM12kT8-FC1h6-w-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 888271895950; Sat, 25 Apr 2020 07:02:17 +0000 (UTC) Received: from virtlab511.virt.lab.eng.bos.redhat.com (virtlab511.virt.lab.eng.bos.redhat.com [10.19.152.198]) by smtp.corp.redhat.com (Postfix) with ESMTP id CB2DB600E8; Sat, 25 Apr 2020 
07:02:16 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: wei.huang2@amd.com, cavery@redhat.com, vkuznets@redhat.com, Sean Christopherson , Oliver Upton , Jim Mattson Subject: [PATCH v2 21/22] KVM: VMX: Use vmx_get_rflags() to query RFLAGS in vmx_interrupt_blocked() Date: Sat, 25 Apr 2020 03:02:14 -0400 Message-Id: <20200425070215.251397-2-pbonzini@redhat.com> In-Reply-To: <20200424172416.243870-1-pbonzini@redhat.com> References: <20200424172416.243870-1-pbonzini@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Sean Christopherson Use vmx_get_rflags() instead of manually reading vmcs.GUEST_RFLAGS when querying RFLAGS.IF so that multiple checks against interrupt blocking in a single run loop only require a single VMREAD. Signed-off-by: Sean Christopherson Message-Id: <20200423022550.15113-14-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini --- arch/x86/kvm/vmx/vmx.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 0e5eacd6f911..9e2ba1f96cd1 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -4537,7 +4537,7 @@ bool vmx_interrupt_blocked(struct kvm_vcpu *vcpu) if (is_guest_mode(vcpu) && nested_exit_on_intr(vcpu)) return false; - return !(vmcs_readl(GUEST_RFLAGS) & X86_EFLAGS_IF) || + return !(vmx_get_rflags(vcpu) & X86_EFLAGS_IF) || (vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) & (GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS)); } From patchwork Sat Apr 25 07:02:15 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 11509659 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 922AC14DD for ; Sat, 25 Apr 2020 07:02:34 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 738F02098B for ; Sat, 25 Apr 2020 07:02:34 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="PQTaKJjX" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726431AbgDYHCa (ORCPT ); Sat, 25 Apr 2020 03:02:30 -0400 Received: from us-smtp-delivery-1.mimecast.com ([205.139.110.120]:32302 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726278AbgDYHC2 (ORCPT ); Sat, 25 Apr 2020 03:02:28 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1587798146; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:in-reply-to:in-reply-to:references:references; bh=CneAOrsq0789icg4fZmRZ78yyM0UfIEH4gc21K3W3eI=; b=PQTaKJjXUumMvgeaOdQyf9LOcvhIAEA/W2CTUQcZarD2AU/YPF7aBC+nif0DU5I/GWZNZv ieFfo9euXS7qGWQ/ukrx5dmd8Zny3K9h9FtzcZK00YksArap9HTcHXkjeoNCYOWHTOCNBr XnZVeEWDopgNSdR8KECSbtRE1tzOLew= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-392-xXnlrmfYOsGyvwk4lPi7ow-1; Sat, 25 Apr 2020 03:02:19 -0400 X-MC-Unique: xXnlrmfYOsGyvwk4lPi7ow-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by 
mimecast-mx01.redhat.com (Postfix) with ESMTPS id 5F2C51895951; Sat, 25 Apr 2020 07:02:18 +0000 (UTC) Received: from virtlab511.virt.lab.eng.bos.redhat.com (virtlab511.virt.lab.eng.bos.redhat.com [10.19.152.198]) by smtp.corp.redhat.com (Postfix) with ESMTP id A3CA0600E8; Sat, 25 Apr 2020 07:02:17 +0000 (UTC) From: Paolo Bonzini To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: wei.huang2@amd.com, cavery@redhat.com, vkuznets@redhat.com, Sean Christopherson , Oliver Upton , Jim Mattson Subject: [PATCH v2 22/22] KVM: x86: Replace late check_nested_events() hack with more precise fix Date: Sat, 25 Apr 2020 03:02:15 -0400 Message-Id: <20200425070215.251397-3-pbonzini@redhat.com> In-Reply-To: <20200424172416.243870-1-pbonzini@redhat.com> References: <20200424172416.243870-1-pbonzini@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add an argument to interrupt_allowed and nmi_allowed to check whether interrupt injection is blocked. Use the hook to handle the case where an interrupt arrives between check_nested_events() and the injection logic. Drop the retry of check_nested_events() that hack-a-fixed the same condition. Blocking injection is also a bit of a hack, e.g. KVM should do exiting and non-exiting interrupt processing in a single pass, but it's a more precise hack. The old comment is also misleading, e.g. KVM_REQ_EVENT is purely an optimization; setting it on every run loop (which KVM doesn't do) should not affect functionality, only performance. Signed-off-by: Sean Christopherson Message-Id: <20200423022550.15113-13-sean.j.christopherson@intel.com> [Extend to SVM, add SMI and NMI. Even though NMI and SMI cannot come asynchronously right now, making the fix generic is easy and removes a special case.
- Paolo] Signed-off-by: Paolo Bonzini --- arch/x86/include/asm/kvm_host.h | 6 +++--- arch/x86/kvm/svm/svm.c | 25 ++++++++++++++++++----- arch/x86/kvm/vmx/vmx.c | 17 +++++++++++++--- arch/x86/kvm/x86.c | 36 +++++++++++---------------------- 4 files changed, 49 insertions(+), 35 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index efaddc68a694..7cd68d1d0627 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1139,8 +1139,8 @@ struct kvm_x86_ops { void (*set_nmi)(struct kvm_vcpu *vcpu); void (*queue_exception)(struct kvm_vcpu *vcpu); void (*cancel_injection)(struct kvm_vcpu *vcpu); - bool (*interrupt_allowed)(struct kvm_vcpu *vcpu); - bool (*nmi_allowed)(struct kvm_vcpu *vcpu); + bool (*interrupt_allowed)(struct kvm_vcpu *vcpu, bool for_injection); + bool (*nmi_allowed)(struct kvm_vcpu *vcpu, bool for_injection); bool (*get_nmi_mask)(struct kvm_vcpu *vcpu); void (*set_nmi_mask)(struct kvm_vcpu *vcpu, bool masked); void (*enable_nmi_window)(struct kvm_vcpu *vcpu); @@ -1238,7 +1238,7 @@ struct kvm_x86_ops { void (*setup_mce)(struct kvm_vcpu *vcpu); - bool (*smi_allowed)(struct kvm_vcpu *vcpu); + bool (*smi_allowed)(struct kvm_vcpu *vcpu, bool for_injection); int (*pre_enter_smm)(struct kvm_vcpu *vcpu, char *smstate); int (*pre_leave_smm)(struct kvm_vcpu *vcpu, const char *smstate); int (*enable_smi_window)(struct kvm_vcpu *vcpu); diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 0bbb2cd24eb7..8f8fc65bfa3e 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -3080,13 +3080,17 @@ bool svm_nmi_blocked(struct kvm_vcpu *vcpu) return ret; } -static bool svm_nmi_allowed(struct kvm_vcpu *vcpu) +static bool svm_nmi_allowed(struct kvm_vcpu *vcpu, bool for_injection) { struct vcpu_svm *svm = to_svm(vcpu); if (svm->nested.nested_run_pending) return false; - return !svm_nmi_blocked(vcpu); + /* An NMI must not be injected into L2 if it's supposed to VM-Exit. */ + if (for_injection && is_guest_mode(vcpu) && nested_exit_on_nmi(svm)) + return false; + + return !svm_nmi_blocked(vcpu); } static bool svm_get_nmi_mask(struct kvm_vcpu *vcpu) @@ -3135,13 +3139,20 @@ bool svm_interrupt_blocked(struct kvm_vcpu *vcpu) return (vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK); } -static bool svm_interrupt_allowed(struct kvm_vcpu *vcpu) +static bool svm_interrupt_allowed(struct kvm_vcpu *vcpu, bool for_injection) { struct vcpu_svm *svm = to_svm(vcpu); if (svm->nested.nested_run_pending) return false; - return !svm_interrupt_blocked(vcpu); + /* + * An IRQ must not be injected into L2 if it's supposed to VM-Exit, + * e.g. if the IRQ arrived asynchronously after checking nested events. + */ + if (for_injection && is_guest_mode(vcpu) && nested_exit_on_intr(svm)) + return false; + + return !svm_interrupt_blocked(vcpu); } static void enable_irq_window(struct kvm_vcpu *vcpu) @@ -3800,12 +3811,16 @@ bool svm_smi_blocked(struct kvm_vcpu *vcpu) return is_smm(vcpu); } -static bool svm_smi_allowed(struct kvm_vcpu *vcpu) +static bool svm_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection) { struct vcpu_svm *svm = to_svm(vcpu); if (svm->nested.nested_run_pending) return false; + /* An SMI must not be injected into L2 if it's supposed to VM-Exit. 
*/ + if (for_injection && is_guest_mode(vcpu) && nested_exit_on_smi(svm)) + return false; + return !svm_smi_blocked(vcpu); } diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 9e2ba1f96cd1..3ab6ca6062ce 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -4524,11 +4524,15 @@ bool vmx_nmi_blocked(struct kvm_vcpu *vcpu) GUEST_INTR_STATE_NMI)); } -static bool vmx_nmi_allowed(struct kvm_vcpu *vcpu) +static bool vmx_nmi_allowed(struct kvm_vcpu *vcpu, bool for_injection) { if (to_vmx(vcpu)->nested.nested_run_pending) return false; + /* An NMI must not be injected into L2 if it's supposed to VM-Exit. */ + if (for_injection && is_guest_mode(vcpu) && nested_exit_on_nmi(vcpu)) + return false; + return !vmx_nmi_blocked(vcpu); } @@ -4542,11 +4546,18 @@ bool vmx_interrupt_blocked(struct kvm_vcpu *vcpu) (GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS)); } -static bool vmx_interrupt_allowed(struct kvm_vcpu *vcpu) +static bool vmx_interrupt_allowed(struct kvm_vcpu *vcpu, bool for_injection) { if (to_vmx(vcpu)->nested.nested_run_pending) return false; + /* + * An IRQ must not be injected into L2 if it's supposed to VM-Exit, + * e.g. if the IRQ arrived asynchronously after checking nested events. + */ + if (for_injection && is_guest_mode(vcpu) && nested_exit_on_intr(vcpu)) + return false; + return !vmx_interrupt_blocked(vcpu); } @@ -7688,7 +7699,7 @@ static void vmx_setup_mce(struct kvm_vcpu *vcpu) ~FEAT_CTL_LMCE_ENABLED; } -static bool vmx_smi_allowed(struct kvm_vcpu *vcpu) +static bool vmx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection) { /* we need a nested vmexit to enter SMM, postpone if run is pending */ if (to_vmx(vcpu)->nested.nested_run_pending) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 44b9d834621c..856b6fc2c2ba 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -7746,32 +7746,20 @@ static int inject_pending_event(struct kvm_vcpu *vcpu) if (kvm_event_needs_reinjection(vcpu)) return 0; - if (vcpu->arch.smi_pending && kvm_x86_ops.smi_allowed(vcpu)) { + if (vcpu->arch.smi_pending && + kvm_x86_ops.smi_allowed(vcpu, true)) { vcpu->arch.smi_pending = false; ++vcpu->arch.smi_count; enter_smm(vcpu); - } else if (vcpu->arch.nmi_pending && kvm_x86_ops.nmi_allowed(vcpu)) { + } else if (vcpu->arch.nmi_pending && + kvm_x86_ops.nmi_allowed(vcpu, true)) { --vcpu->arch.nmi_pending; vcpu->arch.nmi_injected = true; kvm_x86_ops.set_nmi(vcpu); - } else if (kvm_cpu_has_injectable_intr(vcpu)) { - /* - * Because interrupts can be injected asynchronously, we are - * calling check_nested_events again here to avoid a race condition. - * See https://lkml.org/lkml/2014/7/2/60 for discussion about this - * proposal and current concerns. Perhaps we should be setting - * KVM_REQ_EVENT only on certain events and not unconditionally? 
- */ - if (is_guest_mode(vcpu)) { - r = kvm_x86_ops.nested_ops->check_events(vcpu); - if (r != 0) - return r; - } - if (kvm_x86_ops.interrupt_allowed(vcpu)) { - kvm_queue_interrupt(vcpu, kvm_cpu_get_interrupt(vcpu), - false); - kvm_x86_ops.set_irq(vcpu); - } + } else if (kvm_cpu_has_injectable_intr(vcpu) && + kvm_x86_ops.interrupt_allowed(vcpu, true)) { + kvm_queue_interrupt(vcpu, kvm_cpu_get_interrupt(vcpu), false); + kvm_x86_ops.set_irq(vcpu); } return 0; @@ -10171,12 +10159,12 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu) if (kvm_test_request(KVM_REQ_NMI, vcpu) || (vcpu->arch.nmi_pending && - kvm_x86_ops.nmi_allowed(vcpu))) + kvm_x86_ops.nmi_allowed(vcpu, false))) return true; if (kvm_test_request(KVM_REQ_SMI, vcpu) || (vcpu->arch.smi_pending && - kvm_x86_ops.smi_allowed(vcpu))) + kvm_x86_ops.smi_allowed(vcpu, false))) return true; if (kvm_arch_interrupt_allowed(vcpu) && @@ -10228,7 +10216,7 @@ int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu) int kvm_arch_interrupt_allowed(struct kvm_vcpu *vcpu) { - return kvm_x86_ops.interrupt_allowed(vcpu); + return kvm_x86_ops.interrupt_allowed(vcpu, false); } unsigned long kvm_get_linear_rip(struct kvm_vcpu *vcpu) @@ -10393,7 +10381,7 @@ bool kvm_can_do_async_pf(struct kvm_vcpu *vcpu) * If interrupts are off we cannot even use an artificial * halt state. */ - return kvm_x86_ops.interrupt_allowed(vcpu); + return kvm_arch_interrupt_allowed(vcpu); } void kvm_arch_async_page_not_present(struct kvm_vcpu *vcpu,
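To close the series, a compact model of the for_injection contract (standalone C with invented names; not the kernel implementation):

#include <assert.h>
#include <stdbool.h>

struct vcpu_model {
	bool nested_run_pending;
	bool in_guest_mode;	/* vCPU is running L2 */
	bool exit_on_intr;	/* L1 intercepts external interrupts */
	bool intr_blocked;	/* architectural blocking (IF, shadow, ...) */
};

static bool interrupt_allowed(const struct vcpu_model *v, bool for_injection)
{
	if (v->nested_run_pending)
		return false;
	/*
	 * Injecting now would swallow an event that is supposed to reach
	 * L1 as a vmexit; the injection path must therefore back off and
	 * let check_nested_events() synthesize the exit instead.
	 */
	if (for_injection && v->in_guest_mode && v->exit_on_intr)
		return false;
	return !v->intr_blocked;
}

int main(void)
{
	struct vcpu_model v = { .in_guest_mode = true, .exit_on_intr = true };

	/* Runnability query: yes. Direct injection into L2: no. */
	assert(interrupt_allowed(&v, false));
	assert(!interrupt_allowed(&v, true));
	return 0;
}

In the patch itself, inject_pending_event() passes true while kvm_arch_interrupt_allowed() and the runnability checks pass false, which is exactly the asymmetry the model's two asserts exercise.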