From patchwork Wed Feb 17 14:57:12 2021
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 12091747
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Wanpeng Li, Borislav Petkov, Paolo Bonzini,
    Joerg Roedel, Jim Mattson, "H. Peter Anvin", Sean Christopherson,
    x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)),
    Thomas Gleixner, Vitaly Kuznetsov, Ingo Molnar, Maxim Levitsky
Subject: [PATCH 1/7] KVM: VMX: read idt_vectoring_info a bit earlier
Date: Wed, 17 Feb 2021 16:57:12 +0200
Message-Id: <20210217145718.1217358-2-mlevitsk@redhat.com>
In-Reply-To: <20210217145718.1217358-1-mlevitsk@redhat.com>
References: <20210217145718.1217358-1-mlevitsk@redhat.com>

trace_kvm_exit prints this value (using vmx_get_exit_info), so it makes
sense to read it before the tracepoint fires.
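To make the ordering hazard concrete, here is a self-contained sketch (not part
of the patch): the types and helpers below are hypothetical stand-ins for
vcpu_vmx, vmcs_read32() and vmx_get_exit_info(), and only model the fact that
the tracepoint reports a *cached* copy of the vectoring info.

/*
 * Illustration only -- hypothetical stand-ins, not the real KVM code.
 */
#include <stdint.h>
#include <stdio.h>

struct fake_vmx {
        uint32_t idt_vectoring_info;    /* cached copy consumed by tracing */
        uint32_t vmcs_field;            /* what the latest VM exit wrote */
};

/* Stand-in for vmx_get_exit_info(): reports only the cached copy. */
static uint32_t get_exit_info(const struct fake_vmx *vmx)
{
        return vmx->idt_vectoring_info;
}

int main(void)
{
        struct fake_vmx vmx = {
                .idt_vectoring_info = 0,        /* stale, from an older exit */
                .vmcs_field = 0x80000b0e,       /* vectoring info of the new exit */
        };

        /* Old order: trace first, refresh later -> the trace shows 0. */
        printf("trace before refresh: %#x\n", get_exit_info(&vmx));

        /* New order: refresh the cache from the VMCS, then trace. */
        vmx.idt_vectoring_info = vmx.vmcs_field;
        printf("trace after refresh:  %#x\n", get_exit_info(&vmx));
        return 0;
}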
Fixes: dcf068da7eb2 ("KVM: VMX: Introduce generic fastpath handler")
Signed-off-by: Maxim Levitsky
---
 arch/x86/kvm/vmx/vmx.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index b3e36dc3f164..e428d69e21c0 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6921,13 +6921,15 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
 	if (unlikely((u16)vmx->exit_reason.basic == EXIT_REASON_MCE_DURING_VMENTRY))
 		kvm_machine_check();
 
+	if (likely(!vmx->exit_reason.failed_vmentry))
+		vmx->idt_vectoring_info = vmcs_read32(IDT_VECTORING_INFO_FIELD);
+
 	trace_kvm_exit(vmx->exit_reason.full, vcpu, KVM_ISA_VMX);
 
 	if (unlikely(vmx->exit_reason.failed_vmentry))
 		return EXIT_FASTPATH_NONE;
 
 	vmx->loaded_vmcs->launched = 1;
-	vmx->idt_vectoring_info = vmcs_read32(IDT_VECTORING_INFO_FIELD);
 
 	vmx_recover_nmi_blocking(vmx);
 	vmx_complete_interrupts(vmx);

From patchwork Wed Feb 17 14:57:13 2021
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 12091749
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Wanpeng Li, Borislav Petkov, Paolo Bonzini,
    Joerg Roedel, Jim Mattson, "H. Peter Anvin", Sean Christopherson,
    x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)),
    Thomas Gleixner, Vitaly Kuznetsov, Ingo Molnar, Maxim Levitsky
Subject: [PATCH 2/7] KVM: nSVM: move nested vmrun tracepoint to enter_svm_guest_mode
Date: Wed, 17 Feb 2021 16:57:13 +0200
Message-Id: <20210217145718.1217358-3-mlevitsk@redhat.com>
In-Reply-To: <20210217145718.1217358-1-mlevitsk@redhat.com>
References: <20210217145718.1217358-1-mlevitsk@redhat.com>

This way the tracepoint captures all nested mode entries, including
entries after migration and entries from SMM.

Signed-off-by: Maxim Levitsky
---
 arch/x86/kvm/svm/nested.c | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 519fe84f2100..1bc31e2e8fe0 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -500,6 +500,20 @@ int enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb12_gpa,
 {
 	int ret;
 
+	trace_kvm_nested_vmrun(svm->vmcb->save.rip, vmcb12_gpa,
+			       vmcb12->save.rip,
+			       vmcb12->control.int_ctl,
+			       vmcb12->control.event_inj,
+			       vmcb12->control.nested_ctl);
+
+	trace_kvm_nested_intercepts(vmcb12->control.intercepts[INTERCEPT_CR] & 0xffff,
+				    vmcb12->control.intercepts[INTERCEPT_CR] >> 16,
+				    vmcb12->control.intercepts[INTERCEPT_EXCEPTION],
+				    vmcb12->control.intercepts[INTERCEPT_WORD3],
+				    vmcb12->control.intercepts[INTERCEPT_WORD4],
+				    vmcb12->control.intercepts[INTERCEPT_WORD5]);
+
+
 	svm->nested.vmcb12_gpa = vmcb12_gpa;
 
 	WARN_ON(svm->vmcb == svm->nested.vmcb02.ptr);
@@ -559,18 +573,6 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
 		goto out;
 	}
 
-	trace_kvm_nested_vmrun(svm->vmcb->save.rip, vmcb12_gpa,
-			       vmcb12->save.rip,
-			       vmcb12->control.int_ctl,
-			       vmcb12->control.event_inj,
-			       vmcb12->control.nested_ctl);
-
-	trace_kvm_nested_intercepts(vmcb12->control.intercepts[INTERCEPT_CR] & 0xffff,
-				    vmcb12->control.intercepts[INTERCEPT_CR] >> 16,
-				    vmcb12->control.intercepts[INTERCEPT_EXCEPTION],
-				    vmcb12->control.intercepts[INTERCEPT_WORD3],
-				    vmcb12->control.intercepts[INTERCEPT_WORD4],
-				    vmcb12->control.intercepts[INTERCEPT_WORD5]);
 
 	/* Clear internal status */
 	kvm_clear_exception_queue(vcpu);

From patchwork Wed Feb 17 14:57:14 2021
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 12091751
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Wanpeng Li, Borislav Petkov, Paolo Bonzini,
    Joerg Roedel, Jim Mattson, "H. Peter Anvin", Sean Christopherson,
    x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)),
    Thomas Gleixner, Vitaly Kuznetsov, Ingo Molnar, Maxim Levitsky
Subject: [PATCH 3/7] KVM: x86: add .complete_mmu_init arch callback
Date: Wed, 17 Feb 2021 16:57:14 +0200
Message-Id: <20210217145718.1217358-4-mlevitsk@redhat.com>
In-Reply-To: <20210217145718.1217358-1-mlevitsk@redhat.com>
References: <20210217145718.1217358-1-mlevitsk@redhat.com>

This callback will be used to tweak the MMU context in arch-specific
code after the context has been reset.
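To illustrate the intended pattern, here is a minimal user-space sketch (not
part of the patch, with hypothetical names standing in for the MMU context and
the kvm_x86_ops table): the generic init resets the context and then hands
control to a vendor callback, so vendor-specific overrides survive every
reset. The later patches in this series hang the nSVM/nVMX inject_page_fault
tweak off exactly this hook.

/*
 * Illustration only -- the real hook is static_call(kvm_x86_complete_mmu_init).
 */
#include <stdio.h>

struct mmu_ctx {
        void (*inject_page_fault)(void);
};

static void default_inject(void) { puts("inject #PF into the guest"); }
static void nested_inject(void)  { puts("reflect #PF as a nested VM exit"); }

struct vendor_ops {
        void (*complete_mmu_init)(struct mmu_ctx *ctx);
};

/* Vendor hook: re-applies its tweak after *every* context reset. */
static void vendor_complete_mmu_init(struct mmu_ctx *ctx)
{
        ctx->inject_page_fault = nested_inject;
}

/* Stand-in for kvm_init_mmu(): generic reset first, vendor hook last. */
static void init_mmu(struct mmu_ctx *ctx, const struct vendor_ops *ops)
{
        ctx->inject_page_fault = default_inject;        /* reset to defaults */
        ops->complete_mmu_init(ctx);                    /* vendor tweak survives */
}

int main(void)
{
        const struct vendor_ops ops = {
                .complete_mmu_init = vendor_complete_mmu_init,
        };
        struct mmu_ctx ctx;

        /*
         * Without the completion hook, a second reset would silently drop the
         * override; with it, the override is reinstalled after each reset.
         */
        init_mmu(&ctx, &ops);
        init_mmu(&ctx, &ops);
        ctx.inject_page_fault();
        return 0;
}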
Signed-off-by: Maxim Levitsky
---
 arch/x86/include/asm/kvm-x86-ops.h | 1 +
 arch/x86/include/asm/kvm_host.h    | 2 ++
 arch/x86/kvm/mmu/mmu.c             | 2 ++
 arch/x86/kvm/svm/svm.c             | 6 ++++++
 arch/x86/kvm/vmx/vmx.c             | 6 ++++++
 5 files changed, 17 insertions(+)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 355a2ab8fc09..041e5765dc67 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -86,6 +86,7 @@ KVM_X86_OP(set_tss_addr)
 KVM_X86_OP(set_identity_map_addr)
 KVM_X86_OP(get_mt_mask)
 KVM_X86_OP(load_mmu_pgd)
+KVM_X86_OP(complete_mmu_init)
 KVM_X86_OP_NULL(has_wbinvd_exit)
 KVM_X86_OP(write_l1_tsc_offset)
 KVM_X86_OP(get_exit_info)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index a8e1b57b1532..01a08f936781 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1251,6 +1251,8 @@ struct kvm_x86_ops {
 	void (*load_mmu_pgd)(struct kvm_vcpu *vcpu, unsigned long pgd,
 			     int pgd_level);
 
+	void (*complete_mmu_init) (struct kvm_vcpu *vcpu);
+
 	bool (*has_wbinvd_exit)(void);
 
 	/* Returns actual tsc_offset set in active VMCS */
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e507568cd55d..00bf9ff2e469 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4774,6 +4774,8 @@ void kvm_init_mmu(struct kvm_vcpu *vcpu, bool reset_roots)
 		init_kvm_tdp_mmu(vcpu);
 	else
 		init_kvm_softmmu(vcpu);
+
+	static_call(kvm_x86_complete_mmu_init)(vcpu);
 }
 EXPORT_SYMBOL_GPL(kvm_init_mmu);
 
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 754e07538b4a..74a334c9902a 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3913,6 +3913,11 @@ static void svm_load_mmu_pgd(struct kvm_vcpu *vcpu, unsigned long root,
 	vmcb_mark_dirty(svm->vmcb, VMCB_CR);
 }
 
+static void svm_complete_mmu_init(struct kvm_vcpu *vcpu)
+{
+
+}
+
 static int is_disabled(void)
 {
 	u64 vm_cr;
@@ -4522,6 +4527,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.write_l1_tsc_offset = svm_write_l1_tsc_offset,
 
 	.load_mmu_pgd = svm_load_mmu_pgd,
+	.complete_mmu_init = svm_complete_mmu_init,
 
 	.check_intercept = svm_check_intercept,
 	.handle_exit_irqoff = svm_handle_exit_irqoff,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index e428d69e21c0..bf6ef674d688 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -3252,6 +3252,11 @@ static void vmx_load_mmu_pgd(struct kvm_vcpu *vcpu, unsigned long pgd,
 	vmcs_writel(GUEST_CR3, guest_cr3);
 }
 
+static void vmx_complete_mmu_init(struct kvm_vcpu *vcpu)
+{
+
+}
+
 static bool vmx_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 {
 	/*
@@ -7849,6 +7854,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
 	.write_l1_tsc_offset = vmx_write_l1_tsc_offset,
 
 	.load_mmu_pgd = vmx_load_mmu_pgd,
+	.complete_mmu_init = vmx_complete_mmu_init,
 
 	.check_intercept = vmx_check_intercept,
 	.handle_exit_irqoff = vmx_handle_exit_irqoff,

From patchwork Wed Feb 17 14:57:15 2021
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 12091753
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Wanpeng Li, Borislav Petkov, Paolo Bonzini,
    Joerg Roedel, Jim Mattson, "H. Peter Anvin", Sean Christopherson,
    x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)),
    Thomas Gleixner, Vitaly Kuznetsov, Ingo Molnar, Maxim Levitsky
Subject: [PATCH 4/7] KVM: nVMX: move inject_page_fault tweak to .complete_mmu_init
Date: Wed, 17 Feb 2021 16:57:15 +0200
Message-Id: <20210217145718.1217358-5-mlevitsk@redhat.com>
In-Reply-To: <20210217145718.1217358-1-mlevitsk@redhat.com>
References: <20210217145718.1217358-1-mlevitsk@redhat.com>

This fixes a (mostly theoretical) bug which can happen when ept=0 on the
host and a nested guest triggers an MMU context reset while running in
nested mode. In that case the .inject_page_fault callback is lost.
Signed-off-by: Maxim Levitsky
---
 arch/x86/kvm/vmx/nested.c | 8 +-------
 arch/x86/kvm/vmx/nested.h | 1 +
 arch/x86/kvm/vmx/vmx.c    | 5 ++++-
 3 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 0b6dab6915a3..f9de729dbea6 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -419,7 +419,7 @@ static int nested_vmx_check_exception(struct kvm_vcpu *vcpu, unsigned long *exit
 }
 
 
-static void vmx_inject_page_fault_nested(struct kvm_vcpu *vcpu,
+void vmx_inject_page_fault_nested(struct kvm_vcpu *vcpu,
 		struct x86_exception *fault)
 {
 	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
@@ -2620,9 +2620,6 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 		vmcs_write64(GUEST_PDPTR3, vmcs12->guest_pdptr3);
 	}
 
-	if (!enable_ept)
-		vcpu->arch.walk_mmu->inject_page_fault = vmx_inject_page_fault_nested;
-
 	if ((vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL) &&
 	    WARN_ON_ONCE(kvm_set_msr(vcpu, MSR_CORE_PERF_GLOBAL_CTRL,
 				     vmcs12->guest_ia32_perf_global_ctrl)))
@@ -4224,9 +4221,6 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
 	if (nested_vmx_load_cr3(vcpu, vmcs12->host_cr3, false, &ignored))
 		nested_vmx_abort(vcpu, VMX_ABORT_LOAD_HOST_PDPTE_FAIL);
 
-	if (!enable_ept)
-		vcpu->arch.walk_mmu->inject_page_fault = kvm_inject_page_fault;
-
 	nested_vmx_transition_tlb_flush(vcpu, vmcs12, false);
 
 	vmcs_write32(GUEST_SYSENTER_CS, vmcs12->host_ia32_sysenter_cs);
diff --git a/arch/x86/kvm/vmx/nested.h b/arch/x86/kvm/vmx/nested.h
index 197148d76b8f..2ab279744d38 100644
--- a/arch/x86/kvm/vmx/nested.h
+++ b/arch/x86/kvm/vmx/nested.h
@@ -36,6 +36,7 @@ void nested_vmx_pmu_entry_exit_ctls_update(struct kvm_vcpu *vcpu);
 void nested_mark_vmcs12_pages_dirty(struct kvm_vcpu *vcpu);
 bool nested_vmx_check_io_bitmaps(struct kvm_vcpu *vcpu, unsigned int port,
 				 int size);
+void vmx_inject_page_fault_nested(struct kvm_vcpu *vcpu,struct x86_exception *fault);
 
 static inline struct vmcs12 *get_vmcs12(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index bf6ef674d688..c43324df4877 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -3254,7 +3254,10 @@ static void vmx_load_mmu_pgd(struct kvm_vcpu *vcpu, unsigned long pgd,
 
 static void vmx_complete_mmu_init(struct kvm_vcpu *vcpu)
 {
-
+	if (!enable_ept && is_guest_mode(vcpu)) {
+		WARN_ON(mmu_is_nested(vcpu));
+		vcpu->arch.mmu->inject_page_fault = vmx_inject_page_fault_nested;
+	}
 }
 
 static bool vmx_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)

From patchwork Wed Feb 17 14:57:16 2021
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 12091755
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Wanpeng Li, Borislav Petkov, Paolo Bonzini,
    Joerg Roedel, Jim Mattson, "H. Peter Anvin", Sean Christopherson,
    x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)),
    Thomas Gleixner, Vitaly Kuznetsov, Ingo Molnar, Maxim Levitsky
Subject: [PATCH 5/7] KVM: nSVM: fix running nested guests when npt=0
Date: Wed, 17 Feb 2021 16:57:16 +0200
Message-Id: <20210217145718.1217358-6-mlevitsk@redhat.com>
In-Reply-To: <20210217145718.1217358-1-mlevitsk@redhat.com>
References: <20210217145718.1217358-1-mlevitsk@redhat.com>

When the host runs with npt=0, nSVM needs the same .inject_page_fault
tweak as VMX, to make sure that shadow MMU faults are injected as VM
exits.
Signed-off-by: Maxim Levitsky
---
 arch/x86/kvm/svm/nested.c | 18 ++++++++++++++++++
 arch/x86/kvm/svm/svm.c    |  5 ++++-
 arch/x86/kvm/svm/svm.h    |  1 +
 3 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 1bc31e2e8fe0..53b9037259b5 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -53,6 +53,23 @@ static void nested_svm_inject_npf_exit(struct kvm_vcpu *vcpu,
 	nested_svm_vmexit(svm);
 }
 
+void svm_inject_page_fault_nested(struct kvm_vcpu *vcpu, struct x86_exception *fault)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+	WARN_ON(!is_guest_mode(vcpu));
+
+	if (vmcb_is_intercept(&svm->nested.ctl, INTERCEPT_EXCEPTION_OFFSET + PF_VECTOR) &&
+	    !svm->nested.nested_run_pending) {
+		svm->vmcb->control.exit_code = SVM_EXIT_EXCP_BASE + PF_VECTOR;
+		svm->vmcb->control.exit_code_hi = 0;
+		svm->vmcb->control.exit_info_1 = fault->error_code;
+		svm->vmcb->control.exit_info_2 = fault->address;
+		nested_svm_vmexit(svm);
+	} else {
+		kvm_inject_page_fault(vcpu, fault);
+	}
+}
+
 static u64 nested_svm_get_tdp_pdptr(struct kvm_vcpu *vcpu, int index)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -531,6 +548,7 @@ int enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb12_gpa,
 	if (ret)
 		return ret;
 
+
 	svm_set_gif(svm, true);
 
 	return 0;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 74a334c9902a..59e1767df030 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3915,7 +3915,10 @@ static void svm_load_mmu_pgd(struct kvm_vcpu *vcpu, unsigned long root,
 
 static void svm_complete_mmu_init(struct kvm_vcpu *vcpu)
 {
-
+	if (!npt_enabled && is_guest_mode(vcpu)) {
+		WARN_ON(mmu_is_nested(vcpu));
+		vcpu->arch.mmu->inject_page_fault = svm_inject_page_fault_nested;
+	}
 }
 
 static int is_disabled(void)
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 7b6ca0e49a14..fda80d56c6e3 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -437,6 +437,7 @@ static inline bool nested_exit_on_nmi(struct vcpu_svm *svm)
 	return vmcb_is_intercept(&svm->nested.ctl, INTERCEPT_NMI);
 }
 
+void svm_inject_page_fault_nested(struct kvm_vcpu *vcpu, struct x86_exception *fault);
 int enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
 			 struct vmcb *vmcb12);
 void svm_leave_nested(struct vcpu_svm *svm);
 void svm_free_nested(struct vcpu_svm *svm);

From patchwork Wed Feb 17 14:57:17 2021
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 12091759
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Wanpeng Li, Borislav Petkov, Paolo Bonzini,
    Joerg Roedel, Jim Mattson, "H. Peter Anvin", Sean Christopherson,
    x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)),
    Thomas Gleixner, Vitaly Kuznetsov, Ingo Molnar, Maxim Levitsky
Subject: [PATCH 6/7] KVM: nVMX: don't load PDPTRS right after nested state set
Date: Wed, 17 Feb 2021 16:57:17 +0200
Message-Id: <20210217145718.1217358-7-mlevitsk@redhat.com>
In-Reply-To: <20210217145718.1217358-1-mlevitsk@redhat.com>
References: <20210217145718.1217358-1-mlevitsk@redhat.com>

Just like all other nested memory accesses, loading the PDPTRs after a
migration should be delayed until the first VM entry, to ensure that
guest memory is fully initialized.

Implement this by moving the call to nested_vmx_load_cr3 to
nested_get_vmcs12_pages.

Signed-off-by: Maxim Levitsky
---
 arch/x86/kvm/vmx/nested.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index f9de729dbea6..26084f8eee82 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -2596,11 +2596,6 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 		return -EINVAL;
 	}
 
-	/* Shadow page tables on either EPT or shadow page tables. */
-	if (nested_vmx_load_cr3(vcpu, vmcs12->guest_cr3, nested_cpu_has_ept(vmcs12),
-				entry_failure_code))
-		return -EINVAL;
-
 	/*
 	 * Immediately write vmcs02.GUEST_CR3.  It will be propagated to vmcs12
 	 * on nested VM-Exit, which can occur without actually running L2 and
@@ -3138,11 +3133,16 @@ static bool nested_get_evmcs_page(struct kvm_vcpu *vcpu)
 static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
 {
 	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+	enum vm_entry_failure_code entry_failure_code;
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	struct kvm_host_map *map;
 	struct page *page;
 	u64 hpa;
 
+	if (nested_vmx_load_cr3(vcpu, vmcs12->guest_cr3, nested_cpu_has_ept(vmcs12),
+				&entry_failure_code))
+		return false;
+
 	if (nested_cpu_has2(vmcs12, SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) {
 		/*
 		 * Translate L1 physical address to host physical
@@ -3386,6 +3386,10 @@ enum nvmx_vmentry_status nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu,
 	}
 
 	if (from_vmentry) {
+		if (nested_vmx_load_cr3(vcpu, vmcs12->guest_cr3,
+					nested_cpu_has_ept(vmcs12), &entry_failure_code))
+			goto vmentry_fail_vmexit_guest_mode;
+
 		failed_index = nested_vmx_load_msr(vcpu,
 						   vmcs12->vm_entry_msr_load_addr,
 						   vmcs12->vm_entry_msr_load_count);

From patchwork Wed Feb 17 14:57:18 2021
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 12091757
From: Maxim Levitsky
To: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Wanpeng Li, Borislav Petkov, Paolo Bonzini,
    Joerg Roedel, Jim Mattson, "H. Peter Anvin", Sean Christopherson,
    x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)),
    Thomas Gleixner, Vitaly Kuznetsov, Ingo Molnar, Maxim Levitsky
Subject: [PATCH 7/7] KVM: nSVM: call nested_svm_load_cr3 on nested state load
Date: Wed, 17 Feb 2021 16:57:18 +0200
Message-Id: <20210217145718.1217358-8-mlevitsk@redhat.com>
In-Reply-To: <20210217145718.1217358-1-mlevitsk@redhat.com>
References: <20210217145718.1217358-1-mlevitsk@redhat.com>

While KVM's MMU should be fully reset by loading of nested CR0/CR3/CR4
via KVM_SET_SREGS, we are not yet in nested mode at that point, so only
the root_mmu is reset.

On regular nested entries we call nested_svm_load_cr3, which both
updates the guest's CR3 in the MMU when needed and re-initializes the
MMU, which also initializes the walk_mmu when nested paging is enabled
in both host and guest.

Since we don't call nested_svm_load_cr3 on nested state load, the
walk_mmu can be left uninitialized, which can lead to a NULL pointer
dereference while accessing it: if we get a nested page fault right
after entering the nested guest for the first time after the migration
and decide to emulate it, the emulator ends up calling the NULL
walk_mmu->gva_to_gpa.

Therefore, call nested_svm_load_cr3 on nested state load as well.

Suggested-by: Paolo Bonzini
Signed-off-by: Maxim Levitsky
---
 arch/x86/kvm/svm/nested.c | 40 +++++++++++++++++++++------------------
 1 file changed, 22 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 53b9037259b5..ebc7dfaa9f13 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -215,24 +215,6 @@ static bool nested_svm_vmrun_msrpm(struct vcpu_svm *svm)
 	return true;
 }
 
-static bool svm_get_nested_state_pages(struct kvm_vcpu *vcpu)
-{
-	struct vcpu_svm *svm = to_svm(vcpu);
-
-	if (WARN_ON(!is_guest_mode(vcpu)))
-		return true;
-
-	if (!nested_svm_vmrun_msrpm(svm)) {
-		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
-		vcpu->run->internal.suberror =
-			KVM_INTERNAL_ERROR_EMULATION;
-		vcpu->run->internal.ndata = 0;
-		return false;
-	}
-
-	return true;
-}
-
 static bool nested_vmcb_check_controls(struct vmcb_control_area *control)
 {
 	if (CC(!vmcb_is_intercept(control, INTERCEPT_VMRUN)))
@@ -1311,6 +1293,28 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
 	return ret;
 }
 
+static bool svm_get_nested_state_pages(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+
+	if (WARN_ON(!is_guest_mode(vcpu)))
+		return true;
+
+	if (nested_svm_load_cr3(&svm->vcpu, vcpu->arch.cr3,
+				nested_npt_enabled(svm)))
+		return false;
+
+	if (!nested_svm_vmrun_msrpm(svm)) {
+		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		vcpu->run->internal.suberror =
+			KVM_INTERNAL_ERROR_EMULATION;
+		vcpu->run->internal.ndata = 0;
+		return false;
+	}
+
+	return true;
+}
+
 struct kvm_x86_nested_ops svm_nested_ops = {
 	.check_events = svm_check_nested_events,
 	.get_nested_state_pages = svm_get_nested_state_pages,