From patchwork Thu Aug 20 09:13:20 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 11725947 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 52F2E14F6 for ; Thu, 20 Aug 2020 09:13:56 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 367272078D for ; Thu, 20 Aug 2020 09:13:56 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="OvSaWWD7" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726872AbgHTJNu (ORCPT ); Thu, 20 Aug 2020 05:13:50 -0400 Received: from us-smtp-1.mimecast.com ([207.211.31.81]:51945 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726723AbgHTJNn (ORCPT ); Thu, 20 Aug 2020 05:13:43 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1597914822; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=ZcRR+zhuRSPQz3y/P4w6mA/wG4YpwsNTibjI7+ECMsI=; b=OvSaWWD7xj8YvzyQD0P7ebQwjDCMEyYNIEt9ACIt8F7mGqRbfAF7fJBe1UdCNh0gR4HEgq 6DlXhrjOOzED9TVbA5cy700Bj9l3UjyuZg2CuIVEYhB4U8ih63hvALYy2gX9IAiWJiuNCK /N0aEBxDGzoXzJsebivZRsztlYBoWec= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-355-dQH393HBP0y4UCERGNaR3w-1; Thu, 20 Aug 2020 05:13:38 -0400 X-MC-Unique: dQH393HBP0y4UCERGNaR3w-1 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com [10.5.11.15]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id EC759801A9E; Thu, 20 Aug 2020 09:13:36 +0000 (UTC) Received: from localhost.localdomain (unknown [10.35.206.173]) by smtp.corp.redhat.com (Postfix) with ESMTP id 257C1747B0; Thu, 20 Aug 2020 09:13:32 +0000 (UTC) From: Maxim Levitsky To: kvm@vger.kernel.org Cc: Jim Mattson , Joerg Roedel , Paolo Bonzini , Borislav Petkov , Thomas Gleixner , "H. Peter Anvin" , Vitaly Kuznetsov , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)), Ingo Molnar , Wanpeng Li , Sean Christopherson , Maxim Levitsky Subject: [PATCH 1/8] KVM: SVM: rename a variable in the svm_create_vcpu Date: Thu, 20 Aug 2020 12:13:20 +0300 Message-Id: <20200820091327.197807-2-mlevitsk@redhat.com> In-Reply-To: <20200820091327.197807-1-mlevitsk@redhat.com> References: <20200820091327.197807-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The 'page' variable is used to hold the vcpu's vmcb, so name it as such to avoid confusion.
Signed-off-by: Maxim Levitsky --- arch/x86/kvm/svm/svm.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 03dd7bac8034..562a79e3e63a 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -1171,7 +1171,7 @@ static void svm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event) static int svm_create_vcpu(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm; - struct page *page; + struct page *vmcb_page; struct page *msrpm_pages; struct page *hsave_page; struct page *nested_msrpm_pages; @@ -1181,8 +1181,8 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu) svm = to_svm(vcpu); err = -ENOMEM; - page = alloc_page(GFP_KERNEL_ACCOUNT); - if (!page) + vmcb_page = alloc_page(GFP_KERNEL_ACCOUNT); + if (!vmcb_page) goto out; msrpm_pages = alloc_pages(GFP_KERNEL_ACCOUNT, MSRPM_ALLOC_ORDER); @@ -1216,9 +1216,9 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu) svm->nested.msrpm = page_address(nested_msrpm_pages); svm_vcpu_init_msrpm(svm->nested.msrpm); - svm->vmcb = page_address(page); + svm->vmcb = page_address(vmcb_page); clear_page(svm->vmcb); - svm->vmcb_pa = __sme_set(page_to_pfn(page) << PAGE_SHIFT); + svm->vmcb_pa = __sme_set(page_to_pfn(vmcb_page) << PAGE_SHIFT); svm->asid_generation = 0; init_vmcb(svm); @@ -1234,7 +1234,7 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu) free_page2: __free_pages(msrpm_pages, MSRPM_ALLOC_ORDER); free_page1: - __free_page(page); + __free_page(vmcb_page); out: return err; } From patchwork Thu Aug 20 09:13:21 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 11725971 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B31AC722 for ; Thu, 20 Aug 2020 09:16:00 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 91560207FB for ; Thu, 20 Aug 2020 09:16:00 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="T97tt5Qs" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726911AbgHTJP5 (ORCPT ); Thu, 20 Aug 2020 05:15:57 -0400 Received: from us-smtp-2.mimecast.com ([207.211.31.81]:27516 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726858AbgHTJNp (ORCPT ); Thu, 20 Aug 2020 05:13:45 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1597914824; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=XTgJpN7YBqdgGpVTTMCcG6iRYP305urLEHU/3AJMw1s=; b=T97tt5QszDK+e2gW4/dETxCbzwRp+ckWqXkJHdL6zYaJEzoOSiTsxwGVm+FwEFW3xnra7F TXsYmo64nlLI6GQFfyDXSNzoJeuqnlVrWEPMKH7z/G17AMQuXN8ShcaYFPmzFvafIhi3Ap sPitRa1MrZT3YvtEzNKzK7YaR1N1dH4= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-245-RrrfPBadO56fKwzoLAQwrQ-1; Thu, 20 Aug 2020 05:13:42 -0400 X-MC-Unique: RrrfPBadO56fKwzoLAQwrQ-1 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com [10.5.11.15]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by 
mimecast-mx01.redhat.com (Postfix) with ESMTPS id BD494801AAF; Thu, 20 Aug 2020 09:13:40 +0000 (UTC) Received: from localhost.localdomain (unknown [10.35.206.173]) by smtp.corp.redhat.com (Postfix) with ESMTP id 5B4E5747B0; Thu, 20 Aug 2020 09:13:37 +0000 (UTC) From: Maxim Levitsky To: kvm@vger.kernel.org Cc: Jim Mattson , Joerg Roedel , Paolo Bonzini , Borislav Petkov , Thomas Gleixner , "H. Peter Anvin" , Vitaly Kuznetsov , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)), Ingo Molnar , Wanpeng Li , Sean Christopherson , Maxim Levitsky Subject: [PATCH 2/8] KVM: nSVM: rename nested 'vmcb' to vmcb_gpa in few places Date: Thu, 20 Aug 2020 12:13:21 +0300 Message-Id: <20200820091327.197807-3-mlevitsk@redhat.com> In-Reply-To: <20200820091327.197807-1-mlevitsk@redhat.com> References: <20200820091327.197807-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org No functional changes. Signed-off-by: Maxim Levitsky --- arch/x86/kvm/svm/nested.c | 10 +++++----- arch/x86/kvm/svm/svm.c | 13 +++++++------ arch/x86/kvm/svm/svm.h | 2 +- 3 files changed, 13 insertions(+), 12 deletions(-) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index fb68467e6049..d9755eab2199 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -431,7 +431,7 @@ int enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, { int ret; - svm->nested.vmcb = vmcb_gpa; + svm->nested.vmcb_gpa = vmcb_gpa; load_nested_vmcb_control(svm, &nested_vmcb->control); nested_prepare_vmcb_save(svm, nested_vmcb); nested_prepare_vmcb_control(svm); @@ -568,7 +568,7 @@ int nested_svm_vmexit(struct vcpu_svm *svm) struct vmcb *vmcb = svm->vmcb; struct kvm_host_map map; - rc = kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(svm->nested.vmcb), &map); + rc = kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(svm->nested.vmcb_gpa), &map); if (rc) { if (rc == -EINVAL) kvm_inject_gp(&svm->vcpu, 0); @@ -579,7 +579,7 @@ int nested_svm_vmexit(struct vcpu_svm *svm) /* Exit Guest-Mode */ leave_guest_mode(&svm->vcpu); - svm->nested.vmcb = 0; + svm->nested.vmcb_gpa = 0; WARN_ON_ONCE(svm->nested.nested_run_pending); /* in case we halted in L2 */ @@ -1018,7 +1018,7 @@ static int svm_get_nested_state(struct kvm_vcpu *vcpu, /* First fill in the header and copy it out. 
*/ if (is_guest_mode(vcpu)) { - kvm_state.hdr.svm.vmcb_pa = svm->nested.vmcb; + kvm_state.hdr.svm.vmcb_pa = svm->nested.vmcb_gpa; kvm_state.size += KVM_STATE_NESTED_SVM_VMCB_SIZE; kvm_state.flags |= KVM_STATE_NESTED_GUEST_MODE; @@ -1128,7 +1128,7 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu, copy_vmcb_control_area(&hsave->control, &svm->vmcb->control); hsave->save = save; - svm->nested.vmcb = kvm_state->hdr.svm.vmcb_pa; + svm->nested.vmcb_gpa = kvm_state->hdr.svm.vmcb_pa; load_nested_vmcb_control(svm, &ctl); nested_prepare_vmcb_control(svm); diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 562a79e3e63a..4338d2a2596e 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -1102,7 +1102,7 @@ static void init_vmcb(struct vcpu_svm *svm) } svm->asid_generation = 0; - svm->nested.vmcb = 0; + svm->nested.vmcb_gpa = 0; svm->vcpu.arch.hflags = 0; if (!kvm_pause_in_guest(svm->vcpu.kvm)) { @@ -3884,7 +3884,7 @@ static int svm_pre_enter_smm(struct kvm_vcpu *vcpu, char *smstate) /* FED8h - SVM Guest */ put_smstate(u64, smstate, 0x7ed8, 1); /* FEE0h - SVM Guest VMCB Physical Address */ - put_smstate(u64, smstate, 0x7ee0, svm->nested.vmcb); + put_smstate(u64, smstate, 0x7ee0, svm->nested.vmcb_gpa); svm->vmcb->save.rax = vcpu->arch.regs[VCPU_REGS_RAX]; svm->vmcb->save.rsp = vcpu->arch.regs[VCPU_REGS_RSP]; @@ -3903,17 +3903,18 @@ static int svm_pre_leave_smm(struct kvm_vcpu *vcpu, const char *smstate) struct vmcb *nested_vmcb; struct kvm_host_map map; u64 guest; - u64 vmcb; + u64 vmcb_gpa; int ret = 0; guest = GET_SMSTATE(u64, smstate, 0x7ed8); - vmcb = GET_SMSTATE(u64, smstate, 0x7ee0); + vmcb_gpa = GET_SMSTATE(u64, smstate, 0x7ee0); if (guest) { - if (kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(vmcb), &map) == -EINVAL) + if (kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(vmcb_gpa), &map) == -EINVAL) return 1; + nested_vmcb = map.hva; - ret = enter_svm_guest_mode(svm, vmcb, nested_vmcb); + ret = enter_svm_guest_mode(svm, vmcb_gpa, nested_vmcb); kvm_vcpu_unmap(&svm->vcpu, &map, true); } diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index a798e1731709..03f2f082ef10 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -85,7 +85,7 @@ struct svm_nested_state { struct vmcb *hsave; u64 hsave_msr; u64 vm_cr_msr; - u64 vmcb; + u64 vmcb_gpa; u32 host_intercept_exceptions; /* These are the merged vectors */ From patchwork Thu Aug 20 09:13:22 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 11725949 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D658214F6 for ; Thu, 20 Aug 2020 09:13:58 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B9FB62078D for ; Thu, 20 Aug 2020 09:13:58 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="W0owkLxe" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726387AbgHTJN5 (ORCPT ); Thu, 20 Aug 2020 05:13:57 -0400 Received: from us-smtp-delivery-124.mimecast.com ([216.205.24.124]:57040 "EHLO us-smtp-delivery-124.mimecast.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726816AbgHTJNt (ORCPT ); Thu, 20 Aug 2020 05:13:49 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1597914828; 
h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=uSob+rKVUvO19l6gKXUXMaxMphnCEdWrm7lz8bngft0=; b=W0owkLxe2nZGPdmLQx2a1Uwxr7qyGIixr4u3/de9i7gZYC7l5we1QlSviXnz3ixkkAb8Ju V26gClPwrghhy0Levlf0rK8vSIUjQCt6TfhSRtfz4R06305wYRWuQYllhWikvkGPx+/XAC mRbTHLeJpsCAQ4U47zxFq9GWbFbhxnc= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-465-_-D0NhNgPNa5PIeqpatExA-1; Thu, 20 Aug 2020 05:13:46 -0400 X-MC-Unique: _-D0NhNgPNa5PIeqpatExA-1 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com [10.5.11.15]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id D0BF11074644; Thu, 20 Aug 2020 09:13:44 +0000 (UTC) Received: from localhost.localdomain (unknown [10.35.206.173]) by smtp.corp.redhat.com (Postfix) with ESMTP id 2BC46747B0; Thu, 20 Aug 2020 09:13:40 +0000 (UTC) From: Maxim Levitsky To: kvm@vger.kernel.org Cc: Jim Mattson , Joerg Roedel , Paolo Bonzini , Borislav Petkov , Thomas Gleixner , "H. Peter Anvin" , Vitaly Kuznetsov , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)), Ingo Molnar , Wanpeng Li , Sean Christopherson , Maxim Levitsky Subject: [PATCH 3/8] KVM: SVM: refactor msr permission bitmap allocation Date: Thu, 20 Aug 2020 12:13:22 +0300 Message-Id: <20200820091327.197807-4-mlevitsk@redhat.com> In-Reply-To: <20200820091327.197807-1-mlevitsk@redhat.com> References: <20200820091327.197807-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Replace svm_vcpu_init_msrpm with svm_vcpu_alloc_msrpm, which also allocates the msr bitmap, and add svm_vcpu_free_msrpm to free it. This will be used later to move the nested msr permission bitmap allocation to nested.c. No functional change intended.
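For illustration, here is a minimal sketch of how the new helper pair is meant to be used by a caller (this is not the actual call site; the real usage is the svm_create_vcpu() hunk in the diff below, and the error handling is simplified):

	/* Sketch only: pair svm_vcpu_alloc_msrpm() with svm_vcpu_free_msrpm(). */
	u32 *msrpm = svm_vcpu_alloc_msrpm();	/* allocates MSRPM_ALLOC_ORDER pages, marks all MSRs
						 * intercepted, then opens the 'always' direct-access MSRs */
	if (!msrpm)
		return -ENOMEM;			/* allocation failure is now visible to the caller */
	/* ... use msrpm as the vcpu's MSR permission bitmap ... */
	svm_vcpu_free_msrpm(msrpm);		/* error path / teardown: frees the same allocation */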
Signed-off-by: Maxim Levitsky --- arch/x86/kvm/svm/svm.c | 45 +++++++++++++++++++++--------------------- 1 file changed, 23 insertions(+), 22 deletions(-) diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 4338d2a2596e..e072ecace466 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -609,18 +609,29 @@ static void set_msr_interception(u32 *msrpm, unsigned msr, msrpm[offset] = tmp; } -static void svm_vcpu_init_msrpm(u32 *msrpm) +static u32 *svm_vcpu_alloc_msrpm(void) { int i; + u32 *msrpm; + struct page *pages = alloc_pages(GFP_KERNEL_ACCOUNT, MSRPM_ALLOC_ORDER); + + if (!pages) + return NULL; + msrpm = page_address(pages); memset(msrpm, 0xff, PAGE_SIZE * (1 << MSRPM_ALLOC_ORDER)); for (i = 0; direct_access_msrs[i].index != MSR_INVALID; i++) { if (!direct_access_msrs[i].always) continue; - set_msr_interception(msrpm, direct_access_msrs[i].index, 1, 1); } + return msrpm; +} + +static void svm_vcpu_free_msrpm(u32 *msrpm) +{ + __free_pages(virt_to_page(msrpm), MSRPM_ALLOC_ORDER); } static void add_msr_offset(u32 offset) @@ -1172,9 +1183,7 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm; struct page *vmcb_page; - struct page *msrpm_pages; struct page *hsave_page; - struct page *nested_msrpm_pages; int err; BUILD_BUG_ON(offsetof(struct vcpu_svm, vcpu) != 0); @@ -1185,21 +1194,13 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu) if (!vmcb_page) goto out; - msrpm_pages = alloc_pages(GFP_KERNEL_ACCOUNT, MSRPM_ALLOC_ORDER); - if (!msrpm_pages) - goto free_page1; - - nested_msrpm_pages = alloc_pages(GFP_KERNEL_ACCOUNT, MSRPM_ALLOC_ORDER); - if (!nested_msrpm_pages) - goto free_page2; - hsave_page = alloc_page(GFP_KERNEL_ACCOUNT); if (!hsave_page) - goto free_page3; + goto free_page1; err = avic_init_vcpu(svm); if (err) - goto free_page4; + goto free_page2; /* We initialize this flag to true to make sure that the is_running * bit would be set the first time the vcpu is loaded. 
@@ -1210,11 +1211,13 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu) svm->nested.hsave = page_address(hsave_page); clear_page(svm->nested.hsave); - svm->msrpm = page_address(msrpm_pages); - svm_vcpu_init_msrpm(svm->msrpm); + svm->msrpm = svm_vcpu_alloc_msrpm(); + if (!svm->msrpm) + goto free_page2; - svm->nested.msrpm = page_address(nested_msrpm_pages); - svm_vcpu_init_msrpm(svm->nested.msrpm); + svm->nested.msrpm = svm_vcpu_alloc_msrpm(); + if (!svm->nested.msrpm) + goto free_page3; svm->vmcb = page_address(vmcb_page); clear_page(svm->vmcb); @@ -1227,12 +1230,10 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu) return 0; -free_page4: - __free_page(hsave_page); free_page3: - __free_pages(nested_msrpm_pages, MSRPM_ALLOC_ORDER); + svm_vcpu_free_msrpm(svm->msrpm); free_page2: - __free_pages(msrpm_pages, MSRPM_ALLOC_ORDER); + __free_page(hsave_page); free_page1: __free_page(vmcb_page); out: From patchwork Thu Aug 20 09:13:23 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 11725955 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 7EB6614F6 for ; Thu, 20 Aug 2020 09:14:39 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 676BB21744 for ; Thu, 20 Aug 2020 09:14:39 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="YvisMqVc" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726132AbgHTJOW (ORCPT ); Thu, 20 Aug 2020 05:14:22 -0400 Received: from us-smtp-delivery-1.mimecast.com ([207.211.31.120]:34967 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726879AbgHTJNx (ORCPT ); Thu, 20 Aug 2020 05:13:53 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1597914831; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=gnrUIuS9ipE7Ua20iou4duM5ior6ZWMhgC7nhTZErqA=; b=YvisMqVc30e3igg35fxnMKHeS1DGVwoKLkQJmBpreOafsTRjf6YwtX9aqWxUzkZH3wNZbF 0B1pE5LpgY0zpFmejXTwfSo0V3qTHlw4dQJOy92UaRVjlPTBF1xQeowP3g0Mq0P2XrTDmG 4yUYxqgWrWeZ1MeYsJw9iFCr/Rvriso= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-127-EemCBkxKNOCEx5gtwitqjA-1; Thu, 20 Aug 2020 05:13:50 -0400 X-MC-Unique: EemCBkxKNOCEx5gtwitqjA-1 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com [10.5.11.15]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id A117F1006701; Thu, 20 Aug 2020 09:13:48 +0000 (UTC) Received: from localhost.localdomain (unknown [10.35.206.173]) by smtp.corp.redhat.com (Postfix) with ESMTP id 3F8E9747B0; Thu, 20 Aug 2020 09:13:45 +0000 (UTC) From: Maxim Levitsky To: kvm@vger.kernel.org Cc: Jim Mattson , Joerg Roedel , Paolo Bonzini , Borislav Petkov , Thomas Gleixner , "H. 
Peter Anvin" , Vitaly Kuznetsov , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)), Ingo Molnar , Wanpeng Li , Sean Christopherson , Maxim Levitsky Subject: [PATCH 4/8] KVM: x86: allow kvm_x86_ops.set_efer to return a value Date: Thu, 20 Aug 2020 12:13:23 +0300 Message-Id: <20200820091327.197807-5-mlevitsk@redhat.com> In-Reply-To: <20200820091327.197807-1-mlevitsk@redhat.com> References: <20200820091327.197807-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org This will be used later to return an error when setting this msr fails. For VMX, it already has an error condition when EFER is not in the shared MSR list, so return an error in this case. Signed-off-by: Maxim Levitsky Reported-by: kernel test robot --- arch/x86/include/asm/kvm_host.h | 2 +- arch/x86/kvm/svm/svm.c | 3 ++- arch/x86/kvm/svm/svm.h | 2 +- arch/x86/kvm/vmx/vmx.c | 5 +++-- arch/x86/kvm/x86.c | 3 ++- 5 files changed, 9 insertions(+), 6 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 5ab3af7275d8..bd0519e26053 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1069,7 +1069,7 @@ struct kvm_x86_ops { void (*get_cs_db_l_bits)(struct kvm_vcpu *vcpu, int *db, int *l); void (*set_cr0)(struct kvm_vcpu *vcpu, unsigned long cr0); int (*set_cr4)(struct kvm_vcpu *vcpu, unsigned long cr4); - void (*set_efer)(struct kvm_vcpu *vcpu, u64 efer); + int (*set_efer)(struct kvm_vcpu *vcpu, u64 efer); void (*get_idt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt); void (*set_idt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt); void (*get_gdt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt); diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index e072ecace466..ce0773c9a7fa 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -263,7 +263,7 @@ static int get_max_npt_level(void) #endif } -void svm_set_efer(struct kvm_vcpu *vcpu, u64 efer) +int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer) { struct vcpu_svm *svm = to_svm(vcpu); vcpu->arch.efer = efer; @@ -283,6 +283,7 @@ void svm_set_efer(struct kvm_vcpu *vcpu, u64 efer) svm->vmcb->save.efer = efer | EFER_SVME; vmcb_mark_dirty(svm->vmcb, VMCB_CR); + return 0; } static int is_external_interrupt(u32 info) diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index 03f2f082ef10..ef16f708ed1c 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -349,7 +349,7 @@ static inline bool gif_set(struct vcpu_svm *svm) #define MSR_INVALID 0xffffffffU u32 svm_msrpm_offset(u32 msr); -void svm_set_efer(struct kvm_vcpu *vcpu, u64 efer); +int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer); void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0); int svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4); void svm_flush_tlb(struct kvm_vcpu *vcpu); diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 46ba2e03a892..e90b9e68c7ea 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -2862,13 +2862,13 @@ static void enter_rmode(struct kvm_vcpu *vcpu) kvm_mmu_reset_context(vcpu); } -void vmx_set_efer(struct kvm_vcpu *vcpu, u64 efer) +int vmx_set_efer(struct kvm_vcpu *vcpu, u64 efer) { struct vcpu_vmx *vmx = to_vmx(vcpu); struct shared_msr_entry *msr = find_msr_entry(vmx, MSR_EFER); if (!msr) - return; + return 1; vcpu->arch.efer = efer; if (efer & 
EFER_LMA) { @@ -2880,6 +2880,7 @@ void vmx_set_efer(struct kvm_vcpu *vcpu, u64 efer) msr->data = efer & ~EFER_LME; } setup_msrs(vmx); + return 0; } #ifdef CONFIG_X86_64 diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 2db369a64f29..cad5d9778a21 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -1471,7 +1471,8 @@ static int set_efer(struct kvm_vcpu *vcpu, struct msr_data *msr_info) efer &= ~EFER_LMA; efer |= vcpu->arch.efer & EFER_LMA; - kvm_x86_ops.set_efer(vcpu, efer); + if (kvm_x86_ops.set_efer(vcpu, efer)) + return 1; /* Update reserved bits */ if ((efer ^ old_efer) & EFER_NX) From patchwork Thu Aug 20 09:13:24 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 11725969 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9684E722 for ; Thu, 20 Aug 2020 09:15:55 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7A2BA2173E for ; Thu, 20 Aug 2020 09:15:55 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="adhXtOlK" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726347AbgHTJPx (ORCPT ); Thu, 20 Aug 2020 05:15:53 -0400 Received: from us-smtp-delivery-124.mimecast.com ([63.128.21.124]:48766 "EHLO us-smtp-delivery-124.mimecast.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726905AbgHTJOA (ORCPT ); Thu, 20 Aug 2020 05:14:00 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1597914837; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=3WcKnTyZxsyoHj0AlidHgW+F6NjI0StiuW+cJ3VkHhE=; b=adhXtOlKbybKQWy7FPTWgND8wtvG9emhhxlU5x5m8+DYrKfRymxphBJbGSvPAhkRw14Z6K QoDpkkl4IK0ARaxYCquCjU5BbUtrBy8QCJfyfyBbRWRNSuc2FJ8xk4bWkK4QhTw5xiuJAk UYwuns83GXgVaU3N6r+KJ9/dElceePs= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-302-7gEpQJzhOOm8FtDJnzc1DQ-1; Thu, 20 Aug 2020 05:13:53 -0400 X-MC-Unique: 7gEpQJzhOOm8FtDJnzc1DQ-1 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com [10.5.11.15]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 718411074642; Thu, 20 Aug 2020 09:13:52 +0000 (UTC) Received: from localhost.localdomain (unknown [10.35.206.173]) by smtp.corp.redhat.com (Postfix) with ESMTP id 0FA1B747B0; Thu, 20 Aug 2020 09:13:48 +0000 (UTC) From: Maxim Levitsky To: kvm@vger.kernel.org Cc: Jim Mattson , Joerg Roedel , Paolo Bonzini , Borislav Petkov , Thomas Gleixner , "H. 
Peter Anvin" , Vitaly Kuznetsov , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)), Ingo Molnar , Wanpeng Li , Sean Christopherson , Maxim Levitsky Subject: [PATCH 5/8] KVM: nSVM: implement ondemand allocation of the nested state Date: Thu, 20 Aug 2020 12:13:24 +0300 Message-Id: <20200820091327.197807-6-mlevitsk@redhat.com> In-Reply-To: <20200820091327.197807-1-mlevitsk@redhat.com> References: <20200820091327.197807-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org This way we don't waste memory on VMs which don't enable nesting virtualization Signed-off-by: Maxim Levitsky --- arch/x86/kvm/svm/nested.c | 43 +++++++++++++++++++++++++++ arch/x86/kvm/svm/svm.c | 62 +++++++++++++++++++++++---------------- arch/x86/kvm/svm/svm.h | 6 ++++ 3 files changed, 85 insertions(+), 26 deletions(-) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index d9755eab2199..b6704611fc02 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -473,6 +473,9 @@ int nested_svm_vmrun(struct vcpu_svm *svm) nested_vmcb = map.hva; + if (WARN_ON(!svm->nested.initialized)) + return 1; + if (!nested_vmcb_checks(svm, nested_vmcb)) { nested_vmcb->control.exit_code = SVM_EXIT_ERR; nested_vmcb->control.exit_code_hi = 0; @@ -686,6 +689,46 @@ int nested_svm_vmexit(struct vcpu_svm *svm) return 0; } +int svm_allocate_nested(struct vcpu_svm *svm) +{ + struct page *hsave_page; + + if (svm->nested.initialized) + return 0; + + hsave_page = alloc_page(GFP_KERNEL_ACCOUNT); + if (!hsave_page) + goto free_page1; + + svm->nested.hsave = page_address(hsave_page); + clear_page(svm->nested.hsave); + + svm->nested.msrpm = svm_vcpu_alloc_msrpm(); + if (!svm->nested.msrpm) + goto free_page2; + + svm->nested.initialized = true; + return 0; +free_page2: + __free_page(hsave_page); +free_page1: + return 1; +} + +void svm_free_nested(struct vcpu_svm *svm) +{ + if (!svm->nested.initialized) + return; + + svm_vcpu_free_msrpm(svm->nested.msrpm); + svm->nested.msrpm = NULL; + + __free_page(virt_to_page(svm->nested.hsave)); + svm->nested.hsave = NULL; + + svm->nested.initialized = false; +} + /* * Forcibly leave nested mode in order to be able to reset the VCPU later on. 
*/ diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index ce0773c9a7fa..d941acc36b50 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -266,6 +266,7 @@ static int get_max_npt_level(void) int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer) { struct vcpu_svm *svm = to_svm(vcpu); + u64 old_efer = vcpu->arch.efer; vcpu->arch.efer = efer; if (!npt_enabled) { @@ -276,14 +277,31 @@ int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer) efer &= ~EFER_LME; } - if (!(efer & EFER_SVME)) { - svm_leave_nested(svm); - svm_set_gif(svm, true); + if ((old_efer & EFER_SVME) != (efer & EFER_SVME)) { + if (!(efer & EFER_SVME)) { + svm_leave_nested(svm); + svm_set_gif(svm, true); + + /* + * Free the nested state unless we are in SMM, in which + * case the exit from SVM mode is only for duration of the SMI + * handler + */ + if (!is_smm(&svm->vcpu)) + svm_free_nested(svm); + + } else { + if (svm_allocate_nested(svm)) + goto error; + } } svm->vmcb->save.efer = efer | EFER_SVME; vmcb_mark_dirty(svm->vmcb, VMCB_CR); return 0; +error: + vcpu->arch.efer = old_efer; + return 1; } static int is_external_interrupt(u32 info) @@ -610,7 +628,7 @@ static void set_msr_interception(u32 *msrpm, unsigned msr, msrpm[offset] = tmp; } -static u32 *svm_vcpu_alloc_msrpm(void) +u32 *svm_vcpu_alloc_msrpm(void) { int i; u32 *msrpm; @@ -630,7 +648,7 @@ static u32 *svm_vcpu_alloc_msrpm(void) return msrpm; } -static void svm_vcpu_free_msrpm(u32 *msrpm) +void svm_vcpu_free_msrpm(u32 *msrpm) { __free_pages(virt_to_page(msrpm), MSRPM_ALLOC_ORDER); } @@ -1184,7 +1202,6 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm; struct page *vmcb_page; - struct page *hsave_page; int err; BUILD_BUG_ON(offsetof(struct vcpu_svm, vcpu) != 0); @@ -1195,13 +1212,9 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu) if (!vmcb_page) goto out; - hsave_page = alloc_page(GFP_KERNEL_ACCOUNT); - if (!hsave_page) - goto free_page1; - err = avic_init_vcpu(svm); if (err) - goto free_page2; + goto out; /* We initialize this flag to true to make sure that the is_running * bit would be set the first time the vcpu is loaded. 
@@ -1209,16 +1222,9 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu) if (irqchip_in_kernel(vcpu->kvm) && kvm_apicv_activated(vcpu->kvm)) svm->avic_is_running = true; - svm->nested.hsave = page_address(hsave_page); - clear_page(svm->nested.hsave); - svm->msrpm = svm_vcpu_alloc_msrpm(); if (!svm->msrpm) - goto free_page2; - - svm->nested.msrpm = svm_vcpu_alloc_msrpm(); - if (!svm->nested.msrpm) - goto free_page3; + goto free_page; svm->vmcb = page_address(vmcb_page); clear_page(svm->vmcb); @@ -1231,11 +1237,7 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu) return 0; -free_page3: - svm_vcpu_free_msrpm(svm->msrpm); -free_page2: - __free_page(hsave_page); -free_page1: +free_page: __free_page(vmcb_page); out: return err; @@ -1260,10 +1262,10 @@ static void svm_free_vcpu(struct kvm_vcpu *vcpu) */ svm_clear_current_vmcb(svm->vmcb); + svm_free_nested(svm); + __free_page(pfn_to_page(__sme_clr(svm->vmcb_pa) >> PAGE_SHIFT)); __free_pages(virt_to_page(svm->msrpm), MSRPM_ALLOC_ORDER); - __free_page(virt_to_page(svm->nested.hsave)); - __free_pages(virt_to_page(svm->nested.msrpm), MSRPM_ALLOC_ORDER); } static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu) @@ -3912,6 +3914,14 @@ static int svm_pre_leave_smm(struct kvm_vcpu *vcpu, const char *smstate) vmcb_gpa = GET_SMSTATE(u64, smstate, 0x7ee0); if (guest) { + /* + * This can happen if SVM was not enabled prior to #SMI, + * but guest corrupted the #SMI state and marked it as + * enabled it there + */ + if (!svm->nested.initialized) + return 1; + if (kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(vmcb_gpa), &map) == -EINVAL) return 1; diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index ef16f708ed1c..9dca64a2edb5 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -97,6 +97,8 @@ struct svm_nested_state { /* cache for control fields of the guest */ struct vmcb_control_area ctl; + + bool initialized; }; struct vcpu_svm { @@ -349,6 +351,8 @@ static inline bool gif_set(struct vcpu_svm *svm) #define MSR_INVALID 0xffffffffU u32 svm_msrpm_offset(u32 msr); +u32 *svm_vcpu_alloc_msrpm(void); +void svm_vcpu_free_msrpm(u32 *msrpm); int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer); void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0); int svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4); @@ -390,6 +394,8 @@ static inline bool nested_exit_on_nmi(struct vcpu_svm *svm) int enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, struct vmcb *nested_vmcb); void svm_leave_nested(struct vcpu_svm *svm); +void svm_free_nested(struct vcpu_svm *svm); +int svm_allocate_nested(struct vcpu_svm *svm); int nested_svm_vmrun(struct vcpu_svm *svm); void nested_svm_vmloadsave(struct vmcb *from_vmcb, struct vmcb *to_vmcb); int nested_svm_vmexit(struct vcpu_svm *svm); From patchwork Thu Aug 20 09:13:25 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 11725957 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D71E214F6 for ; Thu, 20 Aug 2020 09:14:54 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B009F2078D for ; Thu, 20 Aug 2020 09:14:54 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="LshiRgxX" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726752AbgHTJOl 
(ORCPT ); Thu, 20 Aug 2020 05:14:41 -0400 Received: from us-smtp-delivery-1.mimecast.com ([205.139.110.120]:24041 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726309AbgHTJOP (ORCPT ); Thu, 20 Aug 2020 05:14:15 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1597914852; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Peu0So29goC0kwU0oEhiX4gY9t6XgKmP71nwrfXlKus=; b=LshiRgxXTjX/o2P9s3WGn5n886knMPqfSzKky7FR8mSdBPsnbvyLUzelLVfKvInQn9M+2+ aH1+I7ZVFqKxUgaJ2KAUxHxvw3NTdc2ZZ9tXbZFLAlG72SLunaVbph/iwQ8nQvOU+uXU1/ zxje73gm6KFpLgSLXkUoY2zJy8QOmYM= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-554-9Hzw8yB3MFSKMQ7_y4BPbg-1; Thu, 20 Aug 2020 05:13:57 -0400 X-MC-Unique: 9Hzw8yB3MFSKMQ7_y4BPbg-1 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com [10.5.11.15]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 45AB71006706; Thu, 20 Aug 2020 09:13:56 +0000 (UTC) Received: from localhost.localdomain (unknown [10.35.206.173]) by smtp.corp.redhat.com (Postfix) with ESMTP id D491A747B0; Thu, 20 Aug 2020 09:13:52 +0000 (UTC) From: Maxim Levitsky To: kvm@vger.kernel.org Cc: Jim Mattson , Joerg Roedel , Paolo Bonzini , Borislav Petkov , Thomas Gleixner , "H. Peter Anvin" , Vitaly Kuznetsov , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)), Ingo Molnar , Wanpeng Li , Sean Christopherson , Maxim Levitsky Subject: [PATCH 6/8] SVM: nSVM: cache whole nested vmcb instead of only its control area Date: Thu, 20 Aug 2020 12:13:25 +0300 Message-Id: <20200820091327.197807-7-mlevitsk@redhat.com> In-Reply-To: <20200820091327.197807-1-mlevitsk@redhat.com> References: <20200820091327.197807-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Until now we were only caching the 'control' area of the vmcb, but we will soon want to have some checks on the 'save' area as well, and this caching will allow us to fix various races that can happen if a (malicious) guest changes parts of the 'save' area during vm entry. No functional change intended other than slightly higher memory usage, since this patch doesn't touch the 'save' area of the cached vmcb.
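To make the race concrete: the nested guest's vmcb lives in guest memory, so any field that is validated in place can be changed by another vCPU of the guest between the check and its use. Below is a rough, simplified sketch of the copy-then-consume pattern that caching the whole vmcb enables; it is condensed from the nested_svm_vmrun() flow, with error handling and the later write-back of exit information omitted:

	rc = kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(vmcb_gpa), &map);
	if (rc)
		return 1;
	nested_vmcb = map.hva;
	/* Snapshot the guest-owned vmcb into host-owned memory (svm->nested.vmcb). */
	load_nested_vmcb_control(svm, &nested_vmcb->control);
	kvm_vcpu_unmap(&svm->vcpu, &map, true);
	/*
	 * From here on, all checks and the setup of the vmcb that actually runs
	 * the nested guest read only the cached copy, which the guest cannot
	 * modify behind KVM's back.
	 */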
Signed-off-by: Maxim Levitsky --- arch/x86/kvm/svm/nested.c | 96 +++++++++++++++++++++++---------------- arch/x86/kvm/svm/svm.c | 10 ++-- arch/x86/kvm/svm/svm.h | 15 +++--- 3 files changed, 69 insertions(+), 52 deletions(-) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index b6704611fc02..c9bb17e9ba11 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -54,7 +54,7 @@ static void nested_svm_inject_npf_exit(struct kvm_vcpu *vcpu, static u64 nested_svm_get_tdp_pdptr(struct kvm_vcpu *vcpu, int index) { struct vcpu_svm *svm = to_svm(vcpu); - u64 cr3 = svm->nested.ctl.nested_cr3; + u64 cr3 = svm->nested.vmcb->control.nested_cr3; u64 pdpte; int ret; @@ -69,7 +69,7 @@ static unsigned long nested_svm_get_tdp_cr3(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm = to_svm(vcpu); - return svm->nested.ctl.nested_cr3; + return svm->nested.vmcb->control.nested_cr3; } static void nested_svm_init_mmu_context(struct kvm_vcpu *vcpu) @@ -81,7 +81,7 @@ static void nested_svm_init_mmu_context(struct kvm_vcpu *vcpu) vcpu->arch.mmu = &vcpu->arch.guest_mmu; kvm_init_shadow_npt_mmu(vcpu, X86_CR0_PG, hsave->save.cr4, hsave->save.efer, - svm->nested.ctl.nested_cr3); + svm->nested.vmcb->control.nested_cr3); vcpu->arch.mmu->get_guest_pgd = nested_svm_get_tdp_cr3; vcpu->arch.mmu->get_pdptr = nested_svm_get_tdp_pdptr; vcpu->arch.mmu->inject_page_fault = nested_svm_inject_npf_exit; @@ -106,7 +106,7 @@ void recalc_intercepts(struct vcpu_svm *svm) c = &svm->vmcb->control; h = &svm->nested.hsave->control; - g = &svm->nested.ctl; + g = &svm->nested.vmcb->control; svm->nested.host_intercept_exceptions = h->intercept_exceptions; @@ -176,7 +176,7 @@ static bool nested_svm_vmrun_msrpm(struct vcpu_svm *svm) */ int i; - if (!(svm->nested.ctl.intercept & (1ULL << INTERCEPT_MSR_PROT))) + if (!(svm->nested.vmcb->control.intercept & (1ULL << INTERCEPT_MSR_PROT))) return true; for (i = 0; i < MSRPM_OFFSETS; i++) { @@ -187,7 +187,7 @@ static bool nested_svm_vmrun_msrpm(struct vcpu_svm *svm) break; p = msrpm_offsets[i]; - offset = svm->nested.ctl.msrpm_base_pa + (p * 4); + offset = svm->nested.vmcb->control.msrpm_base_pa + (p * 4); if (kvm_vcpu_read_guest(&svm->vcpu, offset, &value, 4)) return false; @@ -255,12 +255,12 @@ static bool nested_vmcb_checks(struct vcpu_svm *svm, struct vmcb *vmcb) static void load_nested_vmcb_control(struct vcpu_svm *svm, struct vmcb_control_area *control) { - copy_vmcb_control_area(&svm->nested.ctl, control); + copy_vmcb_control_area(&svm->nested.vmcb->control, control); /* Copy it here because nested_svm_check_controls will check it. */ - svm->nested.ctl.asid = control->asid; - svm->nested.ctl.msrpm_base_pa &= ~0x0fffULL; - svm->nested.ctl.iopm_base_pa &= ~0x0fffULL; + svm->nested.vmcb->control.asid = control->asid; + svm->nested.vmcb->control.msrpm_base_pa &= ~0x0fffULL; + svm->nested.vmcb->control.iopm_base_pa &= ~0x0fffULL; } /* @@ -270,12 +270,12 @@ static void load_nested_vmcb_control(struct vcpu_svm *svm, void sync_nested_vmcb_control(struct vcpu_svm *svm) { u32 mask; - svm->nested.ctl.event_inj = svm->vmcb->control.event_inj; - svm->nested.ctl.event_inj_err = svm->vmcb->control.event_inj_err; + svm->nested.vmcb->control.event_inj = svm->vmcb->control.event_inj; + svm->nested.vmcb->control.event_inj_err = svm->vmcb->control.event_inj_err; /* Only a few fields of int_ctl are written by the processor. 
*/ mask = V_IRQ_MASK | V_TPR_MASK; - if (!(svm->nested.ctl.int_ctl & V_INTR_MASKING_MASK) && + if (!(svm->nested.vmcb->control.int_ctl & V_INTR_MASKING_MASK) && svm_is_intercept(svm, INTERCEPT_VINTR)) { /* * In order to request an interrupt window, L0 is usurping @@ -287,8 +287,8 @@ void sync_nested_vmcb_control(struct vcpu_svm *svm) */ mask &= ~V_IRQ_MASK; } - svm->nested.ctl.int_ctl &= ~mask; - svm->nested.ctl.int_ctl |= svm->vmcb->control.int_ctl & mask; + svm->nested.vmcb->control.int_ctl &= ~mask; + svm->nested.vmcb->control.int_ctl |= svm->vmcb->control.int_ctl & mask; } /* @@ -330,7 +330,7 @@ static void nested_vmcb_save_pending_event(struct vcpu_svm *svm, static inline bool nested_npt_enabled(struct vcpu_svm *svm) { - return svm->nested.ctl.nested_ctl & SVM_NESTED_CTL_NP_ENABLE; + return svm->nested.vmcb->control.nested_ctl & SVM_NESTED_CTL_NP_ENABLE; } /* @@ -399,20 +399,20 @@ static void nested_prepare_vmcb_control(struct vcpu_svm *svm) nested_svm_init_mmu_context(&svm->vcpu); svm->vmcb->control.tsc_offset = svm->vcpu.arch.tsc_offset = - svm->vcpu.arch.l1_tsc_offset + svm->nested.ctl.tsc_offset; + svm->vcpu.arch.l1_tsc_offset + svm->nested.vmcb->control.tsc_offset; svm->vmcb->control.int_ctl = - (svm->nested.ctl.int_ctl & ~mask) | + (svm->nested.vmcb->control.int_ctl & ~mask) | (svm->nested.hsave->control.int_ctl & mask); - svm->vmcb->control.virt_ext = svm->nested.ctl.virt_ext; - svm->vmcb->control.int_vector = svm->nested.ctl.int_vector; - svm->vmcb->control.int_state = svm->nested.ctl.int_state; - svm->vmcb->control.event_inj = svm->nested.ctl.event_inj; - svm->vmcb->control.event_inj_err = svm->nested.ctl.event_inj_err; + svm->vmcb->control.virt_ext = svm->nested.vmcb->control.virt_ext; + svm->vmcb->control.int_vector = svm->nested.vmcb->control.int_vector; + svm->vmcb->control.int_state = svm->nested.vmcb->control.int_state; + svm->vmcb->control.event_inj = svm->nested.vmcb->control.event_inj; + svm->vmcb->control.event_inj_err = svm->nested.vmcb->control.event_inj_err; - svm->vmcb->control.pause_filter_count = svm->nested.ctl.pause_filter_count; - svm->vmcb->control.pause_filter_thresh = svm->nested.ctl.pause_filter_thresh; + svm->vmcb->control.pause_filter_count = svm->nested.vmcb->control.pause_filter_count; + svm->vmcb->control.pause_filter_thresh = svm->nested.vmcb->control.pause_filter_thresh; /* Enter Guest-Mode */ enter_guest_mode(&svm->vcpu); @@ -622,10 +622,10 @@ int nested_svm_vmexit(struct vcpu_svm *svm) if (svm->nrips_enabled) nested_vmcb->control.next_rip = vmcb->control.next_rip; - nested_vmcb->control.int_ctl = svm->nested.ctl.int_ctl; - nested_vmcb->control.tlb_ctl = svm->nested.ctl.tlb_ctl; - nested_vmcb->control.event_inj = svm->nested.ctl.event_inj; - nested_vmcb->control.event_inj_err = svm->nested.ctl.event_inj_err; + nested_vmcb->control.int_ctl = svm->nested.vmcb->control.int_ctl; + nested_vmcb->control.tlb_ctl = svm->nested.vmcb->control.tlb_ctl; + nested_vmcb->control.event_inj = svm->nested.vmcb->control.event_inj; + nested_vmcb->control.event_inj_err = svm->nested.vmcb->control.event_inj_err; nested_vmcb->control.pause_filter_count = svm->vmcb->control.pause_filter_count; @@ -638,7 +638,7 @@ int nested_svm_vmexit(struct vcpu_svm *svm) svm->vmcb->control.tsc_offset = svm->vcpu.arch.tsc_offset = svm->vcpu.arch.l1_tsc_offset; - svm->nested.ctl.nested_cr3 = 0; + svm->nested.vmcb->control.nested_cr3 = 0; /* Restore selected save entries */ svm->vmcb->save.es = hsave->save.es; @@ -692,6 +692,7 @@ int nested_svm_vmexit(struct vcpu_svm *svm) int 
svm_allocate_nested(struct vcpu_svm *svm) { struct page *hsave_page; + struct page *vmcb_page; if (svm->nested.initialized) return 0; @@ -707,8 +708,18 @@ int svm_allocate_nested(struct vcpu_svm *svm) if (!svm->nested.msrpm) goto free_page2; + vmcb_page = alloc_page(GFP_KERNEL_ACCOUNT); + if (!vmcb_page) + goto free_page3; + + svm->nested.vmcb = page_address(vmcb_page); + clear_page(svm->nested.vmcb); + svm->nested.initialized = true; return 0; + +free_page3: + svm_vcpu_free_msrpm(svm->nested.msrpm); free_page2: __free_page(hsave_page); free_page1: @@ -726,6 +737,9 @@ void svm_free_nested(struct vcpu_svm *svm) __free_page(virt_to_page(svm->nested.hsave)); svm->nested.hsave = NULL; + __free_page(virt_to_page(svm->nested.vmcb)); + svm->nested.vmcb = NULL; + svm->nested.initialized = false; } @@ -750,7 +764,7 @@ static int nested_svm_exit_handled_msr(struct vcpu_svm *svm) u32 offset, msr, value; int write, mask; - if (!(svm->nested.ctl.intercept & (1ULL << INTERCEPT_MSR_PROT))) + if (!(svm->nested.vmcb->control.intercept & (1ULL << INTERCEPT_MSR_PROT))) return NESTED_EXIT_HOST; msr = svm->vcpu.arch.regs[VCPU_REGS_RCX]; @@ -764,7 +778,9 @@ static int nested_svm_exit_handled_msr(struct vcpu_svm *svm) /* Offset is in 32 bit units but need in 8 bit units */ offset *= 4; - if (kvm_vcpu_read_guest(&svm->vcpu, svm->nested.ctl.msrpm_base_pa + offset, &value, 4)) + if (kvm_vcpu_read_guest(&svm->vcpu, + svm->nested.vmcb->control.msrpm_base_pa + offset, + &value, 4)) return NESTED_EXIT_DONE; return (value & mask) ? NESTED_EXIT_DONE : NESTED_EXIT_HOST; @@ -777,13 +793,13 @@ static int nested_svm_intercept_ioio(struct vcpu_svm *svm) u8 start_bit; u64 gpa; - if (!(svm->nested.ctl.intercept & (1ULL << INTERCEPT_IOIO_PROT))) + if (!(svm->nested.vmcb->control.intercept & (1ULL << INTERCEPT_IOIO_PROT))) return NESTED_EXIT_HOST; port = svm->vmcb->control.exit_info_1 >> 16; size = (svm->vmcb->control.exit_info_1 & SVM_IOIO_SIZE_MASK) >> SVM_IOIO_SIZE_SHIFT; - gpa = svm->nested.ctl.iopm_base_pa + (port / 8); + gpa = svm->nested.vmcb->control.iopm_base_pa + (port / 8); start_bit = port % 8; iopm_len = (start_bit + size > 8) ? 2 : 1; mask = (0xf >> (4 - size)) << start_bit; @@ -809,13 +825,13 @@ static int nested_svm_intercept(struct vcpu_svm *svm) break; case SVM_EXIT_READ_CR0 ... SVM_EXIT_WRITE_CR8: { u32 bit = 1U << (exit_code - SVM_EXIT_READ_CR0); - if (svm->nested.ctl.intercept_cr & bit) + if (svm->nested.vmcb->control.intercept_cr & bit) vmexit = NESTED_EXIT_DONE; break; } case SVM_EXIT_READ_DR0 ... 
SVM_EXIT_WRITE_DR7: { u32 bit = 1U << (exit_code - SVM_EXIT_READ_DR0); - if (svm->nested.ctl.intercept_dr & bit) + if (svm->nested.vmcb->control.intercept_dr & bit) vmexit = NESTED_EXIT_DONE; break; } @@ -834,7 +850,7 @@ static int nested_svm_intercept(struct vcpu_svm *svm) } default: { u64 exit_bits = 1ULL << (exit_code - SVM_EXIT_INTR); - if (svm->nested.ctl.intercept & exit_bits) + if (svm->nested.vmcb->control.intercept & exit_bits) vmexit = NESTED_EXIT_DONE; } } @@ -874,7 +890,7 @@ static bool nested_exit_on_exception(struct vcpu_svm *svm) { unsigned int nr = svm->vcpu.arch.exception.nr; - return (svm->nested.ctl.intercept_exceptions & (1 << nr)); + return (svm->nested.vmcb->control.intercept_exceptions & (1 << nr)); } static void nested_svm_inject_exception_vmexit(struct vcpu_svm *svm) @@ -942,7 +958,7 @@ static void nested_svm_intr(struct vcpu_svm *svm) static inline bool nested_exit_on_init(struct vcpu_svm *svm) { - return (svm->nested.ctl.intercept & (1ULL << INTERCEPT_INIT)); + return (svm->nested.vmcb->control.intercept & (1ULL << INTERCEPT_INIT)); } static void nested_svm_init(struct vcpu_svm *svm) @@ -1084,7 +1100,7 @@ static int svm_get_nested_state(struct kvm_vcpu *vcpu, */ if (clear_user(user_vmcb, KVM_STATE_NESTED_SVM_VMCB_SIZE)) return -EFAULT; - if (copy_to_user(&user_vmcb->control, &svm->nested.ctl, + if (copy_to_user(&user_vmcb->control, &svm->nested.vmcb->control, sizeof(user_vmcb->control))) return -EFAULT; if (copy_to_user(&user_vmcb->save, &svm->nested.hsave->save, diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index d941acc36b50..0af51b54c9f5 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -1400,8 +1400,8 @@ static void svm_clear_vintr(struct vcpu_svm *svm) svm->nested.hsave->control.int_ctl &= mask; WARN_ON((svm->vmcb->control.int_ctl & V_TPR_MASK) != - (svm->nested.ctl.int_ctl & V_TPR_MASK)); - svm->vmcb->control.int_ctl |= svm->nested.ctl.int_ctl & ~mask; + (svm->nested.vmcb->control.int_ctl & V_TPR_MASK)); + svm->vmcb->control.int_ctl |= svm->nested.vmcb->control.int_ctl & ~mask; } vmcb_mark_dirty(svm->vmcb, VMCB_INTR); @@ -2224,7 +2224,7 @@ static bool check_selective_cr0_intercepted(struct vcpu_svm *svm, bool ret = false; u64 intercept; - intercept = svm->nested.ctl.intercept; + intercept = svm->nested.vmcb->control.intercept; if (!is_guest_mode(&svm->vcpu) || (!(intercept & (1ULL << INTERCEPT_SELECTIVE_CR0)))) @@ -3132,7 +3132,7 @@ bool svm_interrupt_blocked(struct kvm_vcpu *vcpu) if (is_guest_mode(vcpu)) { /* As long as interrupts are being delivered... */ - if ((svm->nested.ctl.int_ctl & V_INTR_MASKING_MASK) + if ((svm->nested.vmcb->control.int_ctl & V_INTR_MASKING_MASK) ? !(svm->nested.hsave->save.rflags & X86_EFLAGS_IF) : !(kvm_get_rflags(vcpu) & X86_EFLAGS_IF)) return true; @@ -3751,7 +3751,7 @@ static int svm_check_intercept(struct kvm_vcpu *vcpu, info->intercept == x86_intercept_clts) break; - intercept = svm->nested.ctl.intercept; + intercept = svm->nested.vmcb->control.intercept; if (!(intercept & (1ULL << INTERCEPT_SELECTIVE_CR0))) break; diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index 9dca64a2edb5..1669755f796e 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -85,7 +85,11 @@ struct svm_nested_state { struct vmcb *hsave; u64 hsave_msr; u64 vm_cr_msr; + + /* guest mode vmcb, aka vmcb12*/ + struct vmcb *vmcb; u64 vmcb_gpa; + u32 host_intercept_exceptions; /* These are the merged vectors */ @@ -95,9 +99,6 @@ struct svm_nested_state { * we cannot inject a nested vmexit yet. 
*/ bool nested_run_pending; - /* cache for control fields of the guest */ - struct vmcb_control_area ctl; - bool initialized; }; @@ -373,22 +374,22 @@ static inline bool nested_svm_virtualize_tpr(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm = to_svm(vcpu); - return is_guest_mode(vcpu) && (svm->nested.ctl.int_ctl & V_INTR_MASKING_MASK); + return is_guest_mode(vcpu) && (svm->nested.vmcb->control.int_ctl & V_INTR_MASKING_MASK); } static inline bool nested_exit_on_smi(struct vcpu_svm *svm) { - return (svm->nested.ctl.intercept & (1ULL << INTERCEPT_SMI)); + return (svm->nested.vmcb->control.intercept & (1ULL << INTERCEPT_SMI)); } static inline bool nested_exit_on_intr(struct vcpu_svm *svm) { - return (svm->nested.ctl.intercept & (1ULL << INTERCEPT_INTR)); + return (svm->nested.vmcb->control.intercept & (1ULL << INTERCEPT_INTR)); } static inline bool nested_exit_on_nmi(struct vcpu_svm *svm) { - return (svm->nested.ctl.intercept & (1ULL << INTERCEPT_NMI)); + return (svm->nested.vmcb->control.intercept & (1ULL << INTERCEPT_NMI)); } int enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, From patchwork Thu Aug 20 09:13:26 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 11725959 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 52DB7739 for ; Thu, 20 Aug 2020 09:14:59 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3567D207FB for ; Thu, 20 Aug 2020 09:14:59 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="GImhwjea" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727033AbgHTJO4 (ORCPT ); Thu, 20 Aug 2020 05:14:56 -0400 Received: from us-smtp-delivery-1.mimecast.com ([205.139.110.120]:43264 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726885AbgHTJOU (ORCPT ); Thu, 20 Aug 2020 05:14:20 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1597914858; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=S2GO/ekihvTaQm7HCtkeS85AogNNM/HDnW7xAaWMsro=; b=GImhwjeaCMnDz4JLl73oIsHBSgXY0L9Es6iZsft7NvDO7vcVfvjROu8m5XJxfehR8EtKmW UiFNlRUtRiRpbX/T29UQQNUpGPQPmvpWtOxH4CnUYHwDmbDKhiGNOJkuLvigA38tEdayuN FLLrxVX5BXFXeS75fdvv5hoTJhX+nYY= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-102-jdS6GFKRNNedpR9pGK4lMA-1; Thu, 20 Aug 2020 05:14:01 -0400 X-MC-Unique: jdS6GFKRNNedpR9pGK4lMA-1 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com [10.5.11.15]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 1856B8030AD; Thu, 20 Aug 2020 09:14:00 +0000 (UTC) Received: from localhost.localdomain (unknown [10.35.206.173]) by smtp.corp.redhat.com (Postfix) with ESMTP id A9272747B0; Thu, 20 Aug 2020 09:13:56 +0000 (UTC) From: Maxim Levitsky To: kvm@vger.kernel.org Cc: Jim Mattson , Joerg Roedel , Paolo Bonzini , Borislav Petkov , Thomas Gleixner , "H. 
Peter Anvin" , Vitaly Kuznetsov , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)), Ingo Molnar , Wanpeng Li , Sean Christopherson , Maxim Levitsky Subject: [PATCH 7/8] KVM: nSVM: implement caching of nested vmcb save area Date: Thu, 20 Aug 2020 12:13:26 +0300 Message-Id: <20200820091327.197807-8-mlevitsk@redhat.com> In-Reply-To: <20200820091327.197807-1-mlevitsk@redhat.com> References: <20200820091327.197807-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Soon we will have some checks on the save area and this caching will allow us to read the guest's vmcb save area once and then use it, thus avoiding case when a malicious guest changes it after we verified it, but before we copied parts of it to the main vmcb. On nested vmexit also sync the updated save state area of the guest, to the cache, although this is probably overkill. Signed-off-by: Maxim Levitsky --- arch/x86/kvm/svm/nested.c | 131 ++++++++++++++++++++++++++------------ arch/x86/kvm/svm/svm.c | 6 +- arch/x86/kvm/svm/svm.h | 4 +- 3 files changed, 97 insertions(+), 44 deletions(-) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index c9bb17e9ba11..acc4b26fcfcc 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -215,9 +215,11 @@ static bool nested_vmcb_check_controls(struct vmcb_control_area *control) return true; } -static bool nested_vmcb_checks(struct vcpu_svm *svm, struct vmcb *vmcb) +static bool nested_vmcb_checks(struct vcpu_svm *svm) { bool nested_vmcb_lma; + struct vmcb *vmcb = svm->nested.vmcb; + if ((vmcb->save.efer & EFER_SVME) == 0) return false; @@ -263,6 +265,43 @@ static void load_nested_vmcb_control(struct vcpu_svm *svm, svm->nested.vmcb->control.iopm_base_pa &= ~0x0fffULL; } +static void load_nested_vmcb_save(struct vcpu_svm *svm, + struct vmcb_save_area *save) +{ + svm->nested.vmcb->save.rflags = save->rflags; + svm->nested.vmcb->save.rax = save->rax; + svm->nested.vmcb->save.rsp = save->rsp; + svm->nested.vmcb->save.rip = save->rip; + + svm->nested.vmcb->save.es = save->es; + svm->nested.vmcb->save.cs = save->cs; + svm->nested.vmcb->save.ss = save->ss; + svm->nested.vmcb->save.ds = save->ds; + svm->nested.vmcb->save.cpl = save->cpl; + + svm->nested.vmcb->save.gdtr = save->gdtr; + svm->nested.vmcb->save.idtr = save->idtr; + + svm->nested.vmcb->save.efer = save->efer; + svm->nested.vmcb->save.cr3 = save->cr3; + svm->nested.vmcb->save.cr4 = save->cr4; + svm->nested.vmcb->save.cr0 = save->cr0; + + svm->nested.vmcb->save.cr2 = save->cr2; + + svm->nested.vmcb->save.dr7 = save->dr7; + svm->nested.vmcb->save.dr6 = save->dr6; + + svm->nested.vmcb->save.g_pat = save->g_pat; +} + +void load_nested_vmcb(struct vcpu_svm *svm, struct vmcb *nested_vmcb, u64 vmcb_gpa) +{ + svm->nested.vmcb_gpa = vmcb_gpa; + load_nested_vmcb_control(svm, &nested_vmcb->control); + load_nested_vmcb_save(svm, &nested_vmcb->save); +} + /* * Synchronize fields that are written by the processor, so that * they can be copied back into the nested_vmcb. 
@@ -364,31 +403,31 @@ static int nested_svm_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3, return 0; } -static void nested_prepare_vmcb_save(struct vcpu_svm *svm, struct vmcb *nested_vmcb) +static void nested_prepare_vmcb_save(struct vcpu_svm *svm) { /* Load the nested guest state */ - svm->vmcb->save.es = nested_vmcb->save.es; - svm->vmcb->save.cs = nested_vmcb->save.cs; - svm->vmcb->save.ss = nested_vmcb->save.ss; - svm->vmcb->save.ds = nested_vmcb->save.ds; - svm->vmcb->save.gdtr = nested_vmcb->save.gdtr; - svm->vmcb->save.idtr = nested_vmcb->save.idtr; - kvm_set_rflags(&svm->vcpu, nested_vmcb->save.rflags); - svm_set_efer(&svm->vcpu, nested_vmcb->save.efer); - svm_set_cr0(&svm->vcpu, nested_vmcb->save.cr0); - svm_set_cr4(&svm->vcpu, nested_vmcb->save.cr4); - svm->vmcb->save.cr2 = svm->vcpu.arch.cr2 = nested_vmcb->save.cr2; - kvm_rax_write(&svm->vcpu, nested_vmcb->save.rax); - kvm_rsp_write(&svm->vcpu, nested_vmcb->save.rsp); - kvm_rip_write(&svm->vcpu, nested_vmcb->save.rip); + svm->vmcb->save.es = svm->nested.vmcb->save.es; + svm->vmcb->save.cs = svm->nested.vmcb->save.cs; + svm->vmcb->save.ss = svm->nested.vmcb->save.ss; + svm->vmcb->save.ds = svm->nested.vmcb->save.ds; + svm->vmcb->save.gdtr = svm->nested.vmcb->save.gdtr; + svm->vmcb->save.idtr = svm->nested.vmcb->save.idtr; + kvm_set_rflags(&svm->vcpu, svm->nested.vmcb->save.rflags); + svm_set_efer(&svm->vcpu, svm->nested.vmcb->save.efer); + svm_set_cr0(&svm->vcpu, svm->nested.vmcb->save.cr0); + svm_set_cr4(&svm->vcpu, svm->nested.vmcb->save.cr4); + svm->vmcb->save.cr2 = svm->vcpu.arch.cr2 = svm->nested.vmcb->save.cr2; + kvm_rax_write(&svm->vcpu, svm->nested.vmcb->save.rax); + kvm_rsp_write(&svm->vcpu, svm->nested.vmcb->save.rsp); + kvm_rip_write(&svm->vcpu, svm->nested.vmcb->save.rip); /* In case we don't even reach vcpu_run, the fields are not updated */ - svm->vmcb->save.rax = nested_vmcb->save.rax; - svm->vmcb->save.rsp = nested_vmcb->save.rsp; - svm->vmcb->save.rip = nested_vmcb->save.rip; - svm->vmcb->save.dr7 = nested_vmcb->save.dr7; - svm->vcpu.arch.dr6 = nested_vmcb->save.dr6; - svm->vmcb->save.cpl = nested_vmcb->save.cpl; + svm->vmcb->save.rax = svm->nested.vmcb->save.rax; + svm->vmcb->save.rsp = svm->nested.vmcb->save.rsp; + svm->vmcb->save.rip = svm->nested.vmcb->save.rip; + svm->vmcb->save.dr7 = svm->nested.vmcb->save.dr7; + svm->vcpu.arch.dr6 = svm->nested.vmcb->save.dr6; + svm->vmcb->save.cpl = svm->nested.vmcb->save.cpl; } static void nested_prepare_vmcb_control(struct vcpu_svm *svm) @@ -426,17 +465,13 @@ static void nested_prepare_vmcb_control(struct vcpu_svm *svm) vmcb_mark_all_dirty(svm->vmcb); } -int enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, - struct vmcb *nested_vmcb) +int enter_svm_guest_mode(struct vcpu_svm *svm) { int ret; - - svm->nested.vmcb_gpa = vmcb_gpa; - load_nested_vmcb_control(svm, &nested_vmcb->control); - nested_prepare_vmcb_save(svm, nested_vmcb); + nested_prepare_vmcb_save(svm); nested_prepare_vmcb_control(svm); - ret = nested_svm_load_cr3(&svm->vcpu, nested_vmcb->save.cr3, + ret = nested_svm_load_cr3(&svm->vcpu, svm->nested.vmcb->save.cr3, nested_npt_enabled(svm)); if (ret) return ret; @@ -476,7 +511,9 @@ int nested_svm_vmrun(struct vcpu_svm *svm) if (WARN_ON(!svm->nested.initialized)) return 1; - if (!nested_vmcb_checks(svm, nested_vmcb)) { + load_nested_vmcb(svm, nested_vmcb, vmcb_gpa); + + if (!nested_vmcb_checks(svm)) { nested_vmcb->control.exit_code = SVM_EXIT_ERR; nested_vmcb->control.exit_code_hi = 0; nested_vmcb->control.exit_info_1 = 0; @@ -485,15 +522,15 @@ int 
nested_svm_vmrun(struct vcpu_svm *svm) } trace_kvm_nested_vmrun(svm->vmcb->save.rip, vmcb_gpa, - nested_vmcb->save.rip, - nested_vmcb->control.int_ctl, - nested_vmcb->control.event_inj, - nested_vmcb->control.nested_ctl); + svm->nested.vmcb->save.rip, + svm->nested.vmcb->control.int_ctl, + svm->nested.vmcb->control.event_inj, + svm->nested.vmcb->control.nested_ctl); - trace_kvm_nested_intercepts(nested_vmcb->control.intercept_cr & 0xffff, - nested_vmcb->control.intercept_cr >> 16, - nested_vmcb->control.intercept_exceptions, - nested_vmcb->control.intercept); + trace_kvm_nested_intercepts(svm->nested.vmcb->control.intercept_cr & 0xffff, + svm->nested.vmcb->control.intercept_cr >> 16, + svm->nested.vmcb->control.intercept_exceptions, + svm->nested.vmcb->control.intercept); /* Clear internal status */ kvm_clear_exception_queue(&svm->vcpu); @@ -525,7 +562,7 @@ int nested_svm_vmrun(struct vcpu_svm *svm) svm->nested.nested_run_pending = 1; - if (enter_svm_guest_mode(svm, vmcb_gpa, nested_vmcb)) + if (enter_svm_guest_mode(svm)) goto out_exit_err; if (nested_svm_vmrun_msrpm(svm)) @@ -632,6 +669,15 @@ int nested_svm_vmexit(struct vcpu_svm *svm) nested_vmcb->control.pause_filter_thresh = svm->vmcb->control.pause_filter_thresh; + /* + * Write back the nested vmcb state that we just updated + * to the nested vmcb cache to keep it up to date + * + * Note: since CPU might have changed the values we can't + * trust clean bits + */ + load_nested_vmcb_save(svm, &nested_vmcb->save); + /* Restore the original control entries */ copy_vmcb_control_area(&vmcb->control, &hsave->control); @@ -1191,6 +1237,13 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu, load_nested_vmcb_control(svm, &ctl); nested_prepare_vmcb_control(svm); + /* + * TODO: cached nested guest vmcb data area is not restored, thus + * it will be invalid till nested guest vmexits. 
+ * It shouldn't matter much since the area is not supposed to be + * in sync with the CPU anyway while the nested guest is running + */ + out_set_gif: svm_set_gif(svm, !!(kvm_state->flags & KVM_STATE_NESTED_GIF_SET)); return 0; diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 0af51b54c9f5..06668e0f93e7 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -3904,7 +3904,6 @@ static int svm_pre_enter_smm(struct kvm_vcpu *vcpu, char *smstate) static int svm_pre_leave_smm(struct kvm_vcpu *vcpu, const char *smstate) { struct vcpu_svm *svm = to_svm(vcpu); - struct vmcb *nested_vmcb; struct kvm_host_map map; u64 guest; u64 vmcb_gpa; @@ -3925,8 +3924,9 @@ static int svm_pre_leave_smm(struct kvm_vcpu *vcpu, const char *smstate) if (kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(vmcb_gpa), &map) == -EINVAL) return 1; - nested_vmcb = map.hva; - ret = enter_svm_guest_mode(svm, vmcb_gpa, nested_vmcb); + load_nested_vmcb(svm, map.hva, vmcb); + ret = enter_svm_guest_mode(svm); + kvm_vcpu_unmap(&svm->vcpu, &map, true); } diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index 1669755f796e..80231ef8de6f 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -392,8 +392,7 @@ static inline bool nested_exit_on_nmi(struct vcpu_svm *svm) return (svm->nested.vmcb->control.intercept & (1ULL << INTERCEPT_NMI)); } -int enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa, - struct vmcb *nested_vmcb); +int enter_svm_guest_mode(struct vcpu_svm *svm); void svm_leave_nested(struct vcpu_svm *svm); void svm_free_nested(struct vcpu_svm *svm); int svm_allocate_nested(struct vcpu_svm *svm); @@ -406,6 +405,7 @@ int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr, bool has_error_code, u32 error_code); int nested_svm_exit_special(struct vcpu_svm *svm); void sync_nested_vmcb_control(struct vcpu_svm *svm); +void load_nested_vmcb(struct vcpu_svm *svm, struct vmcb *nested_vmcb, u64 vmcb_gpa); extern struct kvm_x86_nested_ops svm_nested_ops; From patchwork Thu Aug 20 09:13:27 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Levitsky X-Patchwork-Id: 11725963
From: Maxim Levitsky To: kvm@vger.kernel.org Cc: Jim Mattson , Joerg Roedel , Paolo Bonzini , Borislav Petkov , Thomas Gleixner , "H. Peter Anvin" , Vitaly Kuznetsov , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)), Ingo Molnar , Wanpeng Li , Sean Christopherson , Maxim Levitsky Subject: [PATCH 8/8] KVM: nSVM: read only changed fields of the nested guest data area Date: Thu, 20 Aug 2020 12:13:27 +0300 Message-Id: <20200820091327.197807-9-mlevitsk@redhat.com> In-Reply-To: <20200820091327.197807-1-mlevitsk@redhat.com> References: <20200820091327.197807-1-mlevitsk@redhat.com> MIME-Version: 1.0 On vmentry, this allows us to read only the fields that the nested guest has marked dirty (i.e. whose clean bits are cleared). I doubt that this has any measurable performance impact, but it brings the behavior a bit closer to real hardware. Signed-off-by: Maxim Levitsky --- arch/x86/kvm/svm/nested.c | 58 +++++++++++++++++++++++++-------------- arch/x86/kvm/svm/svm.c | 2 +- arch/x86/kvm/svm/svm.h | 5 ++++ 3 files changed, 44 insertions(+), 21 deletions(-) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index acc4b26fcfcc..f3eef48caee6 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -266,40 +266,57 @@ static void load_nested_vmcb_control(struct vcpu_svm *svm, } static void load_nested_vmcb_save(struct vcpu_svm *svm, - struct vmcb_save_area *save) + struct vmcb_save_area *save, + u32 clean) { svm->nested.vmcb->save.rflags = save->rflags; svm->nested.vmcb->save.rax = save->rax; svm->nested.vmcb->save.rsp = save->rsp; svm->nested.vmcb->save.rip = save->rip; - svm->nested.vmcb->save.es = save->es; - svm->nested.vmcb->save.cs = save->cs; - svm->nested.vmcb->save.ss = save->ss; - svm->nested.vmcb->save.ds = save->ds; - svm->nested.vmcb->save.cpl = save->cpl; + if (is_dirty(clean, VMCB_SEG)) { + svm->nested.vmcb->save.es = save->es; + svm->nested.vmcb->save.cs = save->cs; + svm->nested.vmcb->save.ss = save->ss; + svm->nested.vmcb->save.ds = save->ds; + svm->nested.vmcb->save.cpl = save->cpl; + } - svm->nested.vmcb->save.gdtr = save->gdtr; - svm->nested.vmcb->save.idtr = save->idtr; + if (is_dirty(clean, VMCB_DT)) { + svm->nested.vmcb->save.gdtr = save->gdtr; + svm->nested.vmcb->save.idtr = save->idtr; + } - svm->nested.vmcb->save.efer = save->efer; - svm->nested.vmcb->save.cr3 = save->cr3; - svm->nested.vmcb->save.cr4 = save->cr4; - svm->nested.vmcb->save.cr0 = save->cr0; + if (is_dirty(clean, VMCB_CR)) { + svm->nested.vmcb->save.efer = save->efer; + svm->nested.vmcb->save.cr3 = save->cr3; + svm->nested.vmcb->save.cr4 = save->cr4; + svm->nested.vmcb->save.cr0 = save->cr0; + } -
svm->nested.vmcb->save.cr2 = save->cr2; + if (is_dirty(clean, VMCB_CR2)) + svm->nested.vmcb->save.cr2 = save->cr2; - svm->nested.vmcb->save.dr7 = save->dr7; - svm->nested.vmcb->save.dr6 = save->dr6; + if (is_dirty(clean, VMCB_DR)) { + svm->nested.vmcb->save.dr7 = save->dr7; + svm->nested.vmcb->save.dr6 = save->dr6; + } - svm->nested.vmcb->save.g_pat = save->g_pat; + if (is_dirty(clean, VMCB_NPT)) + svm->nested.vmcb->save.g_pat = save->g_pat; } void load_nested_vmcb(struct vcpu_svm *svm, struct vmcb *nested_vmcb, u64 vmcb_gpa) { - svm->nested.vmcb_gpa = vmcb_gpa; + u32 clean = nested_vmcb->control.clean; + + if (svm->nested.vmcb_gpa != vmcb_gpa) { + svm->nested.vmcb_gpa = vmcb_gpa; + clean = 0; + } + load_nested_vmcb_control(svm, &nested_vmcb->control); - load_nested_vmcb_save(svm, &nested_vmcb->save); + load_nested_vmcb_save(svm, &nested_vmcb->save, clean); } /* @@ -619,7 +636,6 @@ int nested_svm_vmexit(struct vcpu_svm *svm) /* Exit Guest-Mode */ leave_guest_mode(&svm->vcpu); - svm->nested.vmcb_gpa = 0; WARN_ON_ONCE(svm->nested.nested_run_pending); /* in case we halted in L2 */ @@ -676,7 +692,7 @@ int nested_svm_vmexit(struct vcpu_svm *svm) * Note: since CPU might have changed the values we can't * trust clean bits */ - load_nested_vmcb_save(svm, &nested_vmcb->save); + load_nested_vmcb_save(svm, &nested_vmcb->save, 0); /* Restore the original control entries */ copy_vmcb_control_area(&vmcb->control, &hsave->control); @@ -759,6 +775,7 @@ int svm_allocate_nested(struct vcpu_svm *svm) goto free_page3; svm->nested.vmcb = page_address(vmcb_page); + svm->nested.vmcb_gpa = U64_MAX; clear_page(svm->nested.vmcb); svm->nested.initialized = true; @@ -785,6 +802,7 @@ void svm_free_nested(struct vcpu_svm *svm) __free_page(virt_to_page(svm->nested.vmcb)); svm->nested.vmcb = NULL; + svm->nested.vmcb_gpa = U64_MAX; svm->nested.initialized = false; } diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 06668e0f93e7..f0bb7f622dca 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -3924,7 +3924,7 @@ static int svm_pre_leave_smm(struct kvm_vcpu *vcpu, const char *smstate) if (kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(vmcb_gpa), &map) == -EINVAL) return 1; - load_nested_vmcb(svm, map.hva, vmcb); + load_nested_vmcb(svm, map.hva, vmcb_gpa); ret = enter_svm_guest_mode(svm); kvm_vcpu_unmap(&svm->vcpu, &map, true); diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index 80231ef8de6f..4a383c519fdf 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -204,6 +204,11 @@ static inline void vmcb_mark_dirty(struct vmcb *vmcb, int bit) vmcb->control.clean &= ~(1 << bit); } +static inline bool is_dirty(u32 clean, int bit) +{ + return (clean & (1 << bit)) == 0; +} + static inline struct vcpu_svm *to_svm(struct kvm_vcpu *vcpu) { return container_of(vcpu, struct vcpu_svm, vcpu);