From patchwork Thu Aug 27 17:04:27 2020
X-Patchwork-Id: 11741087
From: Maxim Levitsky <mlevitsk@redhat.com>
To: kvm@vger.kernel.org
Cc: Joerg Roedel, Borislav Petkov, Paolo Bonzini, "H. Peter Anvin",
    linux-kernel@vger.kernel.org, Ingo Molnar,
    x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)),
    Vitaly Kuznetsov, Jim Mattson, Wanpeng Li, Thomas Gleixner,
    Sean Christopherson, Maxim Levitsky
Subject: [PATCH 1/8] KVM: SVM: rename a variable in svm_create_vcpu
Date: Thu, 27 Aug 2020 20:04:27 +0300
Message-Id: <20200827170434.284680-2-mlevitsk@redhat.com>
In-Reply-To: <20200827170434.284680-1-mlevitsk@redhat.com>
References: <20200827170434.284680-1-mlevitsk@redhat.com>

The 'page' variable holds the vcpu's VMCB, so name it as such to avoid
confusion.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Jim Mattson
---
 arch/x86/kvm/svm/svm.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 0cfb8c08e744e..722769eaaf8ce 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1171,7 +1171,7 @@ static void svm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm;
-	struct page *page;
+	struct page *vmcb_page;
 	struct page *msrpm_pages;
 	struct page *hsave_page;
 	struct page *nested_msrpm_pages;
@@ -1181,8 +1181,8 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 	svm = to_svm(vcpu);
 
 	err = -ENOMEM;
-	page = alloc_page(GFP_KERNEL_ACCOUNT);
-	if (!page)
+	vmcb_page = alloc_page(GFP_KERNEL_ACCOUNT);
+	if (!vmcb_page)
 		goto out;
 
 	msrpm_pages = alloc_pages(GFP_KERNEL_ACCOUNT, MSRPM_ALLOC_ORDER);
@@ -1216,9 +1216,9 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 	svm->nested.msrpm = page_address(nested_msrpm_pages);
 	svm_vcpu_init_msrpm(svm->nested.msrpm);
 
-	svm->vmcb = page_address(page);
+	svm->vmcb = page_address(vmcb_page);
 	clear_page(svm->vmcb);
-	svm->vmcb_pa = __sme_set(page_to_pfn(page) << PAGE_SHIFT);
+	svm->vmcb_pa = __sme_set(page_to_pfn(vmcb_page) << PAGE_SHIFT);
 	svm->asid_generation = 0;
 	init_vmcb(svm);
 
@@ -1234,7 +1234,7 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 free_page2:
 	__free_pages(msrpm_pages, MSRPM_ALLOC_ORDER);
 free_page1:
-	__free_page(page);
+	__free_page(vmcb_page);
 out:
 	return err;
 }
From patchwork Thu Aug 27 17:04:28 2020
X-Patchwork-Id: 11741093
From: Maxim Levitsky <mlevitsk@redhat.com>
To: kvm@vger.kernel.org
Cc: Joerg Roedel, Borislav Petkov, Paolo Bonzini, "H. Peter Anvin",
    linux-kernel@vger.kernel.org, Ingo Molnar,
    x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)),
    Vitaly Kuznetsov, Jim Mattson, Wanpeng Li, Thomas Gleixner,
    Sean Christopherson, Maxim Levitsky
Subject: [PATCH 2/8] KVM: nSVM: rename nested vmcb to vmcb12
Date: Thu, 27 Aug 2020 20:04:28 +0300
Message-Id: <20200827170434.284680-3-mlevitsk@redhat.com>
In-Reply-To: <20200827170434.284680-1-mlevitsk@redhat.com>
References: <20200827170434.284680-1-mlevitsk@redhat.com>

This is more consistent with VMX terminology, and supports the upcoming
addition of vmcb02. No functional changes intended.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/nested.c | 225 +++++++++++++++++++-------------------
 arch/x86/kvm/svm/svm.c    |  10 +-
 arch/x86/kvm/svm/svm.h    |   2 +-
 3 files changed, 118 insertions(+), 119 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index e90bc436f5849..7b1c98826c365 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -215,41 +215,39 @@ static bool nested_vmcb_check_controls(struct vmcb_control_area *control)
 	return true;
 }
 
-static bool nested_vmcb_checks(struct vcpu_svm *svm, struct vmcb *vmcb)
+static bool nested_vmcb_checks(struct vcpu_svm *svm, struct vmcb *vmcb12)
 {
-	bool nested_vmcb_lma;
-	if ((vmcb->save.efer & EFER_SVME) == 0)
+	bool vmcb12_lma;
+
+	if ((vmcb12->save.efer & EFER_SVME) == 0)
 		return false;
 
-	if (((vmcb->save.cr0 & X86_CR0_CD) == 0) &&
-	    (vmcb->save.cr0 & X86_CR0_NW))
+	if (((vmcb12->save.cr0 & X86_CR0_CD) == 0) && (vmcb12->save.cr0 & X86_CR0_NW))
 		return false;
 
-	if (!kvm_dr6_valid(vmcb->save.dr6) || !kvm_dr7_valid(vmcb->save.dr7))
+	if (!kvm_dr6_valid(vmcb12->save.dr6) || !kvm_dr7_valid(vmcb12->save.dr7))
 		return false;
 
-	nested_vmcb_lma =
-		(vmcb->save.efer & EFER_LME) &&
-		(vmcb->save.cr0 & X86_CR0_PG);
+	vmcb12_lma = (vmcb12->save.efer & EFER_LME) && (vmcb12->save.cr0 & X86_CR0_PG);
 
-	if (!nested_vmcb_lma) {
-		if (vmcb->save.cr4 & X86_CR4_PAE) {
-			if (vmcb->save.cr3 & MSR_CR3_LEGACY_PAE_RESERVED_MASK)
+	if (!vmcb12_lma) {
+		if (vmcb12->save.cr4 & X86_CR4_PAE) {
+			if (vmcb12->save.cr3 & MSR_CR3_LEGACY_PAE_RESERVED_MASK)
 				return false;
 		} else {
-			if (vmcb->save.cr3 & MSR_CR3_LEGACY_RESERVED_MASK)
+			if (vmcb12->save.cr3 & MSR_CR3_LEGACY_RESERVED_MASK)
 				return false;
 		}
 	} else {
-		if (!(vmcb->save.cr4 & X86_CR4_PAE) ||
-		    !(vmcb->save.cr0 & X86_CR0_PE) ||
-		    (vmcb->save.cr3 & MSR_CR3_LONG_RESERVED_MASK))
+		if (!(vmcb12->save.cr4 & X86_CR4_PAE) ||
+		    !(vmcb12->save.cr0 & X86_CR0_PE) ||
+		    (vmcb12->save.cr3 & MSR_CR3_LONG_RESERVED_MASK))
 			return false;
 	}
 
-	if (kvm_valid_cr4(&svm->vcpu, vmcb->save.cr4))
+	if (kvm_valid_cr4(&svm->vcpu, vmcb12->save.cr4))
 		return false;
 
-	return nested_vmcb_check_controls(&vmcb->control);
+	return nested_vmcb_check_controls(&vmcb12->control);
 }
 
 static void load_nested_vmcb_control(struct vcpu_svm *svm,
@@ -296,7 +294,7 @@ void sync_nested_vmcb_control(struct vcpu_svm *svm)
  * EXIT_INT_INFO.
  */
 static void nested_vmcb_save_pending_event(struct vcpu_svm *svm,
-					   struct vmcb *nested_vmcb)
+					   struct vmcb *vmcb12)
 {
 	struct kvm_vcpu *vcpu = &svm->vcpu;
 	u32 exit_int_info = 0;
@@ -308,7 +306,7 @@ static void nested_vmcb_save_pending_event(struct vcpu_svm *svm,
 
 		if (vcpu->arch.exception.has_error_code) {
 			exit_int_info |= SVM_EVTINJ_VALID_ERR;
-			nested_vmcb->control.exit_int_info_err =
+			vmcb12->control.exit_int_info_err =
 				vcpu->arch.exception.error_code;
 		}
 
@@ -325,7 +323,7 @@ static void nested_vmcb_save_pending_event(struct vcpu_svm *svm,
 			exit_int_info |= SVM_EVTINJ_TYPE_INTR;
 	}
 
-	nested_vmcb->control.exit_int_info = exit_int_info;
+	vmcb12->control.exit_int_info = exit_int_info;
 }
 
 static inline bool nested_npt_enabled(struct vcpu_svm *svm)
@@ -364,31 +362,31 @@ static int nested_svm_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3,
 	return 0;
 }
 
-static void nested_prepare_vmcb_save(struct vcpu_svm *svm, struct vmcb *nested_vmcb)
+static void nested_prepare_vmcb_save(struct vcpu_svm *svm, struct vmcb *vmcb12)
 {
 	/* Load the nested guest state */
-	svm->vmcb->save.es = nested_vmcb->save.es;
-	svm->vmcb->save.cs = nested_vmcb->save.cs;
-	svm->vmcb->save.ss = nested_vmcb->save.ss;
-	svm->vmcb->save.ds = nested_vmcb->save.ds;
-	svm->vmcb->save.gdtr = nested_vmcb->save.gdtr;
-	svm->vmcb->save.idtr = nested_vmcb->save.idtr;
-	kvm_set_rflags(&svm->vcpu, nested_vmcb->save.rflags);
-	svm_set_efer(&svm->vcpu, nested_vmcb->save.efer);
-	svm_set_cr0(&svm->vcpu, nested_vmcb->save.cr0);
-	svm_set_cr4(&svm->vcpu, nested_vmcb->save.cr4);
-	svm->vmcb->save.cr2 = svm->vcpu.arch.cr2 = nested_vmcb->save.cr2;
-	kvm_rax_write(&svm->vcpu, nested_vmcb->save.rax);
-	kvm_rsp_write(&svm->vcpu, nested_vmcb->save.rsp);
-	kvm_rip_write(&svm->vcpu, nested_vmcb->save.rip);
+	svm->vmcb->save.es = vmcb12->save.es;
+	svm->vmcb->save.cs = vmcb12->save.cs;
+	svm->vmcb->save.ss = vmcb12->save.ss;
+	svm->vmcb->save.ds = vmcb12->save.ds;
+	svm->vmcb->save.gdtr = vmcb12->save.gdtr;
+	svm->vmcb->save.idtr = vmcb12->save.idtr;
+	kvm_set_rflags(&svm->vcpu, vmcb12->save.rflags);
+	svm_set_efer(&svm->vcpu, vmcb12->save.efer);
+	svm_set_cr0(&svm->vcpu, vmcb12->save.cr0);
+	svm_set_cr4(&svm->vcpu, vmcb12->save.cr4);
+	svm->vmcb->save.cr2 = svm->vcpu.arch.cr2 = vmcb12->save.cr2;
+	kvm_rax_write(&svm->vcpu, vmcb12->save.rax);
+	kvm_rsp_write(&svm->vcpu, vmcb12->save.rsp);
+	kvm_rip_write(&svm->vcpu, vmcb12->save.rip);
 
 	/* In case we don't even reach vcpu_run, the fields are not updated */
-	svm->vmcb->save.rax = nested_vmcb->save.rax;
-	svm->vmcb->save.rsp = nested_vmcb->save.rsp;
-	svm->vmcb->save.rip = nested_vmcb->save.rip;
-	svm->vmcb->save.dr7 = nested_vmcb->save.dr7;
-	svm->vcpu.arch.dr6 = nested_vmcb->save.dr6;
-	svm->vmcb->save.cpl = nested_vmcb->save.cpl;
+	svm->vmcb->save.rax = vmcb12->save.rax;
+	svm->vmcb->save.rsp = vmcb12->save.rsp;
+	svm->vmcb->save.rip = vmcb12->save.rip;
+	svm->vmcb->save.dr7 = vmcb12->save.dr7;
+	svm->vcpu.arch.dr6 = vmcb12->save.dr6;
+	svm->vmcb->save.cpl = vmcb12->save.cpl;
 }
 
 static void nested_prepare_vmcb_control(struct vcpu_svm *svm)
@@ -426,17 +424,17 @@ static void nested_prepare_vmcb_control(struct vcpu_svm *svm)
 	vmcb_mark_all_dirty(svm->vmcb);
 }
 
-int enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
-			 struct vmcb *nested_vmcb)
+int enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb12_gpa,
+			 struct vmcb *vmcb12)
 {
 	int ret;
 
-	svm->nested.vmcb = vmcb_gpa;
-	load_nested_vmcb_control(svm, &nested_vmcb->control);
-	nested_prepare_vmcb_save(svm, nested_vmcb);
+	svm->nested.vmcb12_gpa = vmcb12_gpa;
+	load_nested_vmcb_control(svm, &vmcb12->control);
+	nested_prepare_vmcb_save(svm, vmcb12);
 	nested_prepare_vmcb_control(svm);
 
-	ret = nested_svm_load_cr3(&svm->vcpu, nested_vmcb->save.cr3,
+	ret = nested_svm_load_cr3(&svm->vcpu, vmcb12->save.cr3,
 				  nested_npt_enabled(svm));
 	if (ret)
 		return ret;
@@ -449,19 +447,19 @@ int enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb12_gpa,
 int nested_svm_vmrun(struct vcpu_svm *svm)
 {
 	int ret;
-	struct vmcb *nested_vmcb;
+	struct vmcb *vmcb12;
 	struct vmcb *hsave = svm->nested.hsave;
 	struct vmcb *vmcb = svm->vmcb;
 	struct kvm_host_map map;
-	u64 vmcb_gpa;
+	u64 vmcb12_gpa;
 
 	if (is_smm(&svm->vcpu)) {
 		kvm_queue_exception(&svm->vcpu, UD_VECTOR);
 		return 1;
 	}
 
-	vmcb_gpa = svm->vmcb->save.rax;
-	ret = kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(vmcb_gpa), &map);
+	vmcb12_gpa = svm->vmcb->save.rax;
+	ret = kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(vmcb12_gpa), &map);
 	if (ret == -EINVAL) {
 		kvm_inject_gp(&svm->vcpu, 0);
 		return 1;
@@ -471,26 +469,26 @@ int nested_svm_vmrun(struct vcpu_svm *svm)
 
 	ret = kvm_skip_emulated_instruction(&svm->vcpu);
 
-	nested_vmcb = map.hva;
+	vmcb12 = map.hva;
 
-	if (!nested_vmcb_checks(svm, nested_vmcb)) {
-		nested_vmcb->control.exit_code = SVM_EXIT_ERR;
-		nested_vmcb->control.exit_code_hi = 0;
-		nested_vmcb->control.exit_info_1 = 0;
-		nested_vmcb->control.exit_info_2 = 0;
+	if (!nested_vmcb_checks(svm, vmcb12)) {
+		vmcb12->control.exit_code = SVM_EXIT_ERR;
+		vmcb12->control.exit_code_hi = 0;
+		vmcb12->control.exit_info_1 = 0;
+		vmcb12->control.exit_info_2 = 0;
 		goto out;
 	}
 
-	trace_kvm_nested_vmrun(svm->vmcb->save.rip, vmcb_gpa,
-			       nested_vmcb->save.rip,
-			       nested_vmcb->control.int_ctl,
-			       nested_vmcb->control.event_inj,
-			       nested_vmcb->control.nested_ctl);
+	trace_kvm_nested_vmrun(svm->vmcb->save.rip, vmcb12_gpa,
+			       vmcb12->save.rip,
+			       vmcb12->control.int_ctl,
+			       vmcb12->control.event_inj,
+			       vmcb12->control.nested_ctl);
 
-	trace_kvm_nested_intercepts(nested_vmcb->control.intercept_cr & 0xffff,
-				    nested_vmcb->control.intercept_cr >> 16,
-				    nested_vmcb->control.intercept_exceptions,
-				    nested_vmcb->control.intercept);
+	trace_kvm_nested_intercepts(vmcb12->control.intercept_cr & 0xffff,
+				    vmcb12->control.intercept_cr >> 16,
+				    vmcb12->control.intercept_exceptions,
+				    vmcb12->control.intercept);
 
 	/* Clear internal status */
 	kvm_clear_exception_queue(&svm->vcpu);
@@ -522,7 +520,7 @@ int nested_svm_vmrun(struct vcpu_svm *svm)
 
 	svm->nested.nested_run_pending = 1;
 
-	if (enter_svm_guest_mode(svm, vmcb_gpa, nested_vmcb))
+	if (enter_svm_guest_mode(svm, vmcb12_gpa, vmcb12))
 		goto out_exit_err;
 
 	if (nested_svm_vmrun_msrpm(svm))
@@ -563,23 +561,23 @@ void nested_svm_vmloadsave(struct vmcb *from_vmcb, struct vmcb *to_vmcb)
 int nested_svm_vmexit(struct vcpu_svm *svm)
 {
 	int rc;
-	struct vmcb *nested_vmcb;
+	struct vmcb *vmcb12;
 	struct vmcb *hsave = svm->nested.hsave;
 	struct vmcb *vmcb = svm->vmcb;
 	struct kvm_host_map map;
 
-	rc = kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(svm->nested.vmcb), &map);
+	rc = kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(svm->nested.vmcb12_gpa), &map);
 	if (rc) {
 		if (rc == -EINVAL)
 			kvm_inject_gp(&svm->vcpu, 0);
 		return 1;
 	}
 
-	nested_vmcb = map.hva;
+	vmcb12 = map.hva;
 
 	/* Exit Guest-Mode */
 	leave_guest_mode(&svm->vcpu);
-	svm->nested.vmcb = 0;
+	svm->nested.vmcb12_gpa = 0;
 	WARN_ON_ONCE(svm->nested.nested_run_pending);
 
 	/* in case we halted in L2 */
@@ -587,45 +585,45 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 
 	/* Give the current vmcb to the guest */
 
-	nested_vmcb->save.es = vmcb->save.es;
-	nested_vmcb->save.cs = vmcb->save.cs;
-	nested_vmcb->save.ss = vmcb->save.ss;
-	nested_vmcb->save.ds = vmcb->save.ds;
-	nested_vmcb->save.gdtr = vmcb->save.gdtr;
-	nested_vmcb->save.idtr = vmcb->save.idtr;
-	nested_vmcb->save.efer = svm->vcpu.arch.efer;
-	nested_vmcb->save.cr0 = kvm_read_cr0(&svm->vcpu);
-	nested_vmcb->save.cr3 = kvm_read_cr3(&svm->vcpu);
-	nested_vmcb->save.cr2 = vmcb->save.cr2;
-	nested_vmcb->save.cr4 = svm->vcpu.arch.cr4;
-	nested_vmcb->save.rflags = kvm_get_rflags(&svm->vcpu);
-	nested_vmcb->save.rip = kvm_rip_read(&svm->vcpu);
-	nested_vmcb->save.rsp = kvm_rsp_read(&svm->vcpu);
-	nested_vmcb->save.rax = kvm_rax_read(&svm->vcpu);
-	nested_vmcb->save.dr7 = vmcb->save.dr7;
-	nested_vmcb->save.dr6 = svm->vcpu.arch.dr6;
-	nested_vmcb->save.cpl = vmcb->save.cpl;
-
-	nested_vmcb->control.int_state = vmcb->control.int_state;
-	nested_vmcb->control.exit_code = vmcb->control.exit_code;
-	nested_vmcb->control.exit_code_hi = vmcb->control.exit_code_hi;
-	nested_vmcb->control.exit_info_1 = vmcb->control.exit_info_1;
-	nested_vmcb->control.exit_info_2 = vmcb->control.exit_info_2;
-
-	if (nested_vmcb->control.exit_code != SVM_EXIT_ERR)
-		nested_vmcb_save_pending_event(svm, nested_vmcb);
+	vmcb12->save.es = vmcb->save.es;
+	vmcb12->save.cs = vmcb->save.cs;
+	vmcb12->save.ss = vmcb->save.ss;
+	vmcb12->save.ds = vmcb->save.ds;
+	vmcb12->save.gdtr = vmcb->save.gdtr;
+	vmcb12->save.idtr = vmcb->save.idtr;
+	vmcb12->save.efer = svm->vcpu.arch.efer;
+	vmcb12->save.cr0 = kvm_read_cr0(&svm->vcpu);
+	vmcb12->save.cr3 = kvm_read_cr3(&svm->vcpu);
+	vmcb12->save.cr2 = vmcb->save.cr2;
+	vmcb12->save.cr4 = svm->vcpu.arch.cr4;
+	vmcb12->save.rflags = kvm_get_rflags(&svm->vcpu);
+	vmcb12->save.rip = kvm_rip_read(&svm->vcpu);
+	vmcb12->save.rsp = kvm_rsp_read(&svm->vcpu);
+	vmcb12->save.rax = kvm_rax_read(&svm->vcpu);
+	vmcb12->save.dr7 = vmcb->save.dr7;
+	vmcb12->save.dr6 = svm->vcpu.arch.dr6;
+	vmcb12->save.cpl = vmcb->save.cpl;
+
+	vmcb12->control.int_state = vmcb->control.int_state;
+	vmcb12->control.exit_code = vmcb->control.exit_code;
+	vmcb12->control.exit_code_hi = vmcb->control.exit_code_hi;
+	vmcb12->control.exit_info_1 = vmcb->control.exit_info_1;
+	vmcb12->control.exit_info_2 = vmcb->control.exit_info_2;
+
+	if (vmcb12->control.exit_code != SVM_EXIT_ERR)
+		nested_vmcb_save_pending_event(svm, vmcb12);
 
 	if (svm->nrips_enabled)
-		nested_vmcb->control.next_rip = vmcb->control.next_rip;
+		vmcb12->control.next_rip = vmcb->control.next_rip;
 
-	nested_vmcb->control.int_ctl = svm->nested.ctl.int_ctl;
-	nested_vmcb->control.tlb_ctl = svm->nested.ctl.tlb_ctl;
-	nested_vmcb->control.event_inj = svm->nested.ctl.event_inj;
-	nested_vmcb->control.event_inj_err = svm->nested.ctl.event_inj_err;
+	vmcb12->control.int_ctl = svm->nested.ctl.int_ctl;
+	vmcb12->control.tlb_ctl = svm->nested.ctl.tlb_ctl;
+	vmcb12->control.event_inj = svm->nested.ctl.event_inj;
+	vmcb12->control.event_inj_err = svm->nested.ctl.event_inj_err;
 
-	nested_vmcb->control.pause_filter_count =
+	vmcb12->control.pause_filter_count =
 		svm->vmcb->control.pause_filter_count;
-	nested_vmcb->control.pause_filter_thresh =
+	vmcb12->control.pause_filter_thresh =
 		svm->vmcb->control.pause_filter_thresh;
 
 	/* Restore the original control entries */
@@ -659,11 +657,11 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 
 	vmcb_mark_all_dirty(svm->vmcb);
 
-	trace_kvm_nested_vmexit_inject(nested_vmcb->control.exit_code,
-				       nested_vmcb->control.exit_info_1,
-				       nested_vmcb->control.exit_info_2,
-				       nested_vmcb->control.exit_int_info,
-				       nested_vmcb->control.exit_int_info_err,
+	trace_kvm_nested_vmexit_inject(vmcb12->control.exit_code,
+				       vmcb12->control.exit_info_1,
+				       vmcb12->control.exit_info_2,
+				       vmcb12->control.exit_int_info,
+				       vmcb12->control.exit_int_info_err,
 				       KVM_ISA_SVM);
 
 	kvm_vcpu_unmap(&svm->vcpu, &map, true);
@@ -1020,7 +1018,7 @@ static int svm_get_nested_state(struct kvm_vcpu *vcpu,
 
 	/* First fill in the header and copy it out. */
 	if (is_guest_mode(vcpu)) {
-		kvm_state.hdr.svm.vmcb_pa = svm->nested.vmcb;
+		kvm_state.hdr.svm.vmcb_pa = svm->nested.vmcb12_gpa;
 		kvm_state.size += KVM_STATE_NESTED_SVM_VMCB_SIZE;
 		kvm_state.flags |= KVM_STATE_NESTED_GUEST_MODE;
 
@@ -1130,7 +1128,8 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
 	copy_vmcb_control_area(&hsave->control, &svm->vmcb->control);
 	hsave->save = save;
 
-	svm->nested.vmcb = kvm_state->hdr.svm.vmcb_pa;
+	svm->nested.vmcb12_gpa = kvm_state->hdr.svm.vmcb_pa;
+
 	load_nested_vmcb_control(svm, &ctl);
 	nested_prepare_vmcb_control(svm);
 
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 722769eaaf8ce..c75a68b2a9c2a 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1102,7 +1102,7 @@ static void init_vmcb(struct vcpu_svm *svm)
 	}
 	svm->asid_generation = 0;
 
-	svm->nested.vmcb = 0;
+	svm->nested.vmcb12_gpa = 0;
 	svm->vcpu.arch.hflags = 0;
 
 	if (!kvm_pause_in_guest(svm->vcpu.kvm)) {
@@ -3884,7 +3884,7 @@ static int svm_pre_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
 		/* FED8h - SVM Guest */
 		put_smstate(u64, smstate, 0x7ed8, 1);
 		/* FEE0h - SVM Guest VMCB Physical Address */
-		put_smstate(u64, smstate, 0x7ee0, svm->nested.vmcb);
+		put_smstate(u64, smstate, 0x7ee0, svm->nested.vmcb12_gpa);
 
 		svm->vmcb->save.rax = vcpu->arch.regs[VCPU_REGS_RAX];
 		svm->vmcb->save.rsp = vcpu->arch.regs[VCPU_REGS_RSP];
@@ -3906,7 +3906,7 @@ static int svm_pre_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
 	if (guest_cpuid_has(vcpu, X86_FEATURE_LM)) {
 		u64 saved_efer = GET_SMSTATE(u64, smstate, 0x7ed0);
 		u64 guest = GET_SMSTATE(u64, smstate, 0x7ed8);
-		u64 vmcb = GET_SMSTATE(u64, smstate, 0x7ee0);
+		u64 vmcb12_gpa = GET_SMSTATE(u64, smstate, 0x7ee0);
 
 		if (guest) {
 			if (!guest_cpuid_has(vcpu, X86_FEATURE_SVM))
@@ -3916,10 +3916,10 @@ static int svm_pre_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
 				return 1;
 
 			if (kvm_vcpu_map(&svm->vcpu,
-					 gpa_to_gfn(vmcb), &map) == -EINVAL)
+					 gpa_to_gfn(vmcb12_gpa), &map) == -EINVAL)
 				return 1;
 
-			ret = enter_svm_guest_mode(svm, vmcb, map.hva);
+			ret = enter_svm_guest_mode(svm, vmcb12_gpa, map.hva);
 
 			kvm_vcpu_unmap(&svm->vcpu, &map, true);
 		}
 	}
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index a798e17317094..ab913468f9cbe 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -85,7 +85,7 @@ struct svm_nested_state {
 	struct vmcb *hsave;
 	u64 hsave_msr;
 	u64 vm_cr_msr;
-	u64 vmcb;
+	u64 vmcb12_gpa;
 	u32 host_intercept_exceptions;
 
 	/* These are the merged vectors */
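
A note on the naming scheme used above, for readers new to it: the "xy"
suffix, borrowed from VMX's vmcs12, identifies which virtualization level
uses a control block to run which other level. A tiny illustrative sketch
(plain C with hypothetical names, not actual KVM code) of the three
control blocks the renaming distinguishes:

	/* Hypothetical sketch of the convention -- not KVM code.
	 * "vmcbXY" = the control block that level X uses to run level Y.
	 */
	struct vmcb;	/* opaque here; holds guest state + control fields */

	struct nested_vmcbs_sketch {
		struct vmcb *vmcb01;	/* L0's VMCB for running L1 */
		struct vmcb *vmcb02;	/* L0's VMCB for running L2 (later work) */
		struct vmcb *vmcb12;	/* L1's VMCB for its L2 guest, in L1 memory */
	};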
From patchwork Thu Aug 27 17:04:29 2020
X-Patchwork-Id: 11741091
From: Maxim Levitsky <mlevitsk@redhat.com>
To: kvm@vger.kernel.org
Cc: Joerg Roedel, Borislav Petkov, Paolo Bonzini, "H. Peter Anvin",
    linux-kernel@vger.kernel.org, Ingo Molnar,
    x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)),
    Vitaly Kuznetsov, Jim Mattson, Wanpeng Li, Thomas Gleixner,
    Sean Christopherson, Maxim Levitsky
Subject: [PATCH 3/8] KVM: SVM: refactor msr permission bitmap allocation
Date: Thu, 27 Aug 2020 20:04:29 +0300
Message-Id: <20200827170434.284680-4-mlevitsk@redhat.com>
In-Reply-To: <20200827170434.284680-1-mlevitsk@redhat.com>
References: <20200827170434.284680-1-mlevitsk@redhat.com>

Change svm_vcpu_init_msrpm so that it also allocates the MSR permission
bitmap (returning it, or NULL on failure), and add svm_vcpu_free_msrpm
to free it.

This will be used later to move the nested msr permission bitmap
allocation to nested.c.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/svm.c | 45 +++++++++++++++++++++---------------------
 1 file changed, 23 insertions(+), 22 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index c75a68b2a9c2a..ddbb05614af4f 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -609,18 +609,29 @@ static void set_msr_interception(u32 *msrpm, unsigned msr,
 	msrpm[offset] = tmp;
 }
 
-static void svm_vcpu_init_msrpm(u32 *msrpm)
+static u32 *svm_vcpu_init_msrpm(void)
 {
 	int i;
+	u32 *msrpm;
+	struct page *pages = alloc_pages(GFP_KERNEL_ACCOUNT, MSRPM_ALLOC_ORDER);
+
+	if (!pages)
+		return NULL;
 
+	msrpm = page_address(pages);
 	memset(msrpm, 0xff, PAGE_SIZE * (1 << MSRPM_ALLOC_ORDER));
 
 	for (i = 0; direct_access_msrs[i].index != MSR_INVALID; i++) {
 		if (!direct_access_msrs[i].always)
 			continue;
-
 		set_msr_interception(msrpm, direct_access_msrs[i].index, 1, 1);
 	}
+	return msrpm;
+}
+
+static void svm_vcpu_free_msrpm(u32 *msrpm)
+{
+	__free_pages(virt_to_page(msrpm), MSRPM_ALLOC_ORDER);
 }
 
 static void add_msr_offset(u32 offset)
@@ -1172,9 +1183,7 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm;
 	struct page *vmcb_page;
-	struct page *msrpm_pages;
 	struct page *hsave_page;
-	struct page *nested_msrpm_pages;
 	int err;
 
 	BUILD_BUG_ON(offsetof(struct vcpu_svm, vcpu) != 0);
@@ -1185,21 +1194,13 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 	if (!vmcb_page)
 		goto out;
 
-	msrpm_pages = alloc_pages(GFP_KERNEL_ACCOUNT, MSRPM_ALLOC_ORDER);
-	if (!msrpm_pages)
-		goto free_page1;
-
-	nested_msrpm_pages = alloc_pages(GFP_KERNEL_ACCOUNT, MSRPM_ALLOC_ORDER);
-	if (!nested_msrpm_pages)
-		goto free_page2;
-
 	hsave_page = alloc_page(GFP_KERNEL_ACCOUNT);
 	if (!hsave_page)
-		goto free_page3;
+		goto free_page1;
 
 	err = avic_init_vcpu(svm);
 	if (err)
-		goto free_page4;
+		goto free_page2;
 
 	/* We initialize this flag to true to make sure that the is_running
 	 * bit would be set the first time the vcpu is loaded.
@@ -1210,11 +1211,13 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 	svm->nested.hsave = page_address(hsave_page);
 	clear_page(svm->nested.hsave);
 
-	svm->msrpm = page_address(msrpm_pages);
-	svm_vcpu_init_msrpm(svm->msrpm);
+	svm->msrpm = svm_vcpu_init_msrpm();
+	if (!svm->msrpm)
+		goto free_page2;
 
-	svm->nested.msrpm = page_address(nested_msrpm_pages);
-	svm_vcpu_init_msrpm(svm->nested.msrpm);
+	svm->nested.msrpm = svm_vcpu_init_msrpm();
+	if (!svm->nested.msrpm)
+		goto free_page3;
 
 	svm->vmcb = page_address(vmcb_page);
 	clear_page(svm->vmcb);
@@ -1227,12 +1230,10 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 
 	return 0;
 
-free_page4:
-	__free_page(hsave_page);
 free_page3:
-	__free_pages(nested_msrpm_pages, MSRPM_ALLOC_ORDER);
+	svm_vcpu_free_msrpm(svm->msrpm);
 free_page2:
-	__free_pages(msrpm_pages, MSRPM_ALLOC_ORDER);
+	__free_page(hsave_page);
 free_page1:
 	__free_page(vmcb_page);
 out:
 	return err;
 }
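
The shape of this refactoring is the common allocate-and-initialize
helper pattern: the helper owns both the allocation and the default
initialization, and returns NULL on failure so each caller needs only
one error branch. A minimal userspace sketch of the pattern (hypothetical
names and sizes, standard C, not the kernel code itself):

	#include <stdlib.h>
	#include <string.h>

	/* stand-in for PAGE_SIZE * (1 << MSRPM_ALLOC_ORDER) */
	#define BITMAP_BYTES (2 * 4096)

	/* Allocate and initialize a permission bitmap; NULL on failure. */
	static unsigned int *bitmap_alloc_init(void)
	{
		unsigned int *bm = malloc(BITMAP_BYTES);

		if (!bm)
			return NULL;
		memset(bm, 0xff, BITMAP_BYTES);	/* default: intercept everything */
		return bm;
	}

	static void bitmap_free(unsigned int *bm)
	{
		free(bm);
	}

	int main(void)
	{
		unsigned int *bm = bitmap_alloc_init();

		if (!bm)
			return 1;
		bitmap_free(bm);
		return 0;
	}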
From patchwork Thu Aug 27 17:04:30 2020
X-Patchwork-Id: 11741105
From: Maxim Levitsky <mlevitsk@redhat.com>
To: kvm@vger.kernel.org
Cc: Joerg Roedel, Borislav Petkov, Paolo Bonzini, "H. Peter Anvin",
    linux-kernel@vger.kernel.org, Ingo Molnar,
    x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)),
    Vitaly Kuznetsov, Jim Mattson, Wanpeng Li, Thomas Gleixner,
    Sean Christopherson, Maxim Levitsky
Subject: [PATCH 4/8] KVM: SVM: use __GFP_ZERO instead of clear_page
Date: Thu, 27 Aug 2020 20:04:30 +0300
Message-Id: <20200827170434.284680-5-mlevitsk@redhat.com>
In-Reply-To: <20200827170434.284680-1-mlevitsk@redhat.com>
References: <20200827170434.284680-1-mlevitsk@redhat.com>

Another small refactoring.

Suggested-by: Jim Mattson
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/svm.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index ddbb05614af4f..290b2d0cd78e3 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1190,11 +1190,11 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 	svm = to_svm(vcpu);
 
 	err = -ENOMEM;
-	vmcb_page = alloc_page(GFP_KERNEL_ACCOUNT);
+	vmcb_page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
 	if (!vmcb_page)
 		goto out;
 
-	hsave_page = alloc_page(GFP_KERNEL_ACCOUNT);
+	hsave_page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
 	if (!hsave_page)
 		goto free_page1;
 
@@ -1209,7 +1209,6 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 		svm->avic_is_running = true;
 
 	svm->nested.hsave = page_address(hsave_page);
-	clear_page(svm->nested.hsave);
 
 	svm->msrpm = svm_vcpu_init_msrpm();
 	if (!svm->msrpm)
@@ -1220,7 +1219,6 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 		goto free_page3;
 
 	svm->vmcb = page_address(vmcb_page);
-	clear_page(svm->vmcb);
 	svm->vmcb_pa = __sme_set(page_to_pfn(vmcb_page) << PAGE_SHIFT);
 	svm->asid_generation = 0;
 	init_vmcb(svm);
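
The change relies on the allocator zeroing the page itself, which drops
one explicit pass over the memory and removes the window where the page
exists but has not yet been cleared. A userspace analogue (nothing
KVM-specific assumed) is replacing malloc+memset with calloc:

	#include <stdlib.h>
	#include <string.h>

	int main(void)
	{
		size_t page = 4096;

		/* Before: allocate, then clear in a separate step. */
		void *a = malloc(page);
		if (!a)
			return 1;
		memset(a, 0, page);

		/* After: ask the allocator for zeroed memory directly, the
		 * moral equivalent of GFP_KERNEL_ACCOUNT | __GFP_ZERO. */
		void *b = calloc(1, page);
		if (!b) {
			free(a);
			return 1;
		}

		free(a);
		free(b);
		return 0;
	}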
From patchwork Thu Aug 27 17:04:31 2020
X-Patchwork-Id: 11741103
From: Maxim Levitsky <mlevitsk@redhat.com>
To: kvm@vger.kernel.org
Cc: Joerg Roedel, Borislav Petkov, Paolo Bonzini, "H. Peter Anvin",
    linux-kernel@vger.kernel.org, Ingo Molnar,
    x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)),
    Vitaly Kuznetsov, Jim Mattson, Wanpeng Li, Thomas Gleixner,
    Sean Christopherson, Maxim Levitsky
Subject: [PATCH 5/8] KVM: SVM: refactor exit labels in svm_create_vcpu
Date: Thu, 27 Aug 2020 20:04:31 +0300
Message-Id: <20200827170434.284680-6-mlevitsk@redhat.com>
In-Reply-To: <20200827170434.284680-1-mlevitsk@redhat.com>
References: <20200827170434.284680-1-mlevitsk@redhat.com>

Kernel coding style suggests not using labels like error1, error2.

Suggested-by: Jim Mattson
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/svm.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 290b2d0cd78e3..b617579095277 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1196,11 +1196,11 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 
 	hsave_page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
 	if (!hsave_page)
-		goto free_page1;
+		goto error_free_vmcb_page;
 
 	err = avic_init_vcpu(svm);
 	if (err)
-		goto free_page2;
+		goto error_free_hsave_page;
 
 	/* We initialize this flag to true to make sure that the is_running
 	 * bit would be set the first time the vcpu is loaded.
@@ -1212,11 +1212,11 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 
 	svm->msrpm = svm_vcpu_init_msrpm();
 	if (!svm->msrpm)
-		goto free_page2;
+		goto error_free_hsave_page;
 
 	svm->nested.msrpm = svm_vcpu_init_msrpm();
 	if (!svm->nested.msrpm)
-		goto free_page3;
+		goto error_free_msrpm;
 
 	svm->vmcb = page_address(vmcb_page);
 	svm->vmcb_pa = __sme_set(page_to_pfn(vmcb_page) << PAGE_SHIFT);
@@ -1228,11 +1228,11 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 
 	return 0;
 
-free_page3:
+error_free_msrpm:
 	svm_vcpu_free_msrpm(svm->msrpm);
-free_page2:
+error_free_hsave_page:
 	__free_page(hsave_page);
-free_page1:
+error_free_vmcb_page:
 	__free_page(vmcb_page);
 out:
 	return err;
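
The pattern being cleaned up here is the kernel's standard goto-unwind
error handling: each label undoes exactly the allocations made before the
failing step, so naming a label after what it frees makes a mis-ordered
goto obvious at a glance. A self-contained sketch of the idiom
(hypothetical resources, not the KVM code):

	#include <stdlib.h>

	struct ctx { void *vmcb; void *msrpm; };

	static int ctx_init(struct ctx *c)
	{
		c->vmcb = malloc(4096);
		if (!c->vmcb)
			goto out;

		c->msrpm = malloc(8192);
		if (!c->msrpm)
			goto error_free_vmcb;	/* name says exactly what is undone */

		return 0;

	error_free_vmcb:
		free(c->vmcb);
	out:
		return -1;
	}

	int main(void)
	{
		struct ctx c;

		if (ctx_init(&c))
			return 1;
		free(c.msrpm);
		free(c.vmcb);
		return 0;
	}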
From patchwork Thu Aug 27 17:04:32 2020
X-Patchwork-Id: 11741097
From: Maxim Levitsky <mlevitsk@redhat.com>
To: kvm@vger.kernel.org
Cc: Joerg Roedel, Borislav Petkov, Paolo Bonzini, "H. Peter Anvin",
    linux-kernel@vger.kernel.org, Ingo Molnar,
    x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)),
    Vitaly Kuznetsov, Jim Mattson, Wanpeng Li, Thomas Gleixner,
    Sean Christopherson, Maxim Levitsky
Subject: [PATCH 6/8] KVM: x86: allow kvm_x86_ops.set_efer to return a value
Date: Thu, 27 Aug 2020 20:04:32 +0300
Message-Id: <20200827170434.284680-7-mlevitsk@redhat.com>
In-Reply-To: <20200827170434.284680-1-mlevitsk@redhat.com>
References: <20200827170434.284680-1-mlevitsk@redhat.com>

This will be used later to return an error when setting this msr fails.

Note that we ignore this return value for qemu initiated writes to
avoid breaking backward compatibility.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/include/asm/kvm_host.h | 2 +-
 arch/x86/kvm/svm/svm.c          | 3 ++-
 arch/x86/kvm/svm/svm.h          | 2 +-
 arch/x86/kvm/vmx/vmx.c          | 9 ++++++---
 arch/x86/kvm/x86.c              | 3 ++-
 5 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 5303dbc5c9bce..b273c199b9a55 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1069,7 +1069,7 @@ struct kvm_x86_ops {
 	void (*get_cs_db_l_bits)(struct kvm_vcpu *vcpu, int *db, int *l);
 	void (*set_cr0)(struct kvm_vcpu *vcpu, unsigned long cr0);
 	int (*set_cr4)(struct kvm_vcpu *vcpu, unsigned long cr4);
-	void (*set_efer)(struct kvm_vcpu *vcpu, u64 efer);
+	int (*set_efer)(struct kvm_vcpu *vcpu, u64 efer);
 	void (*get_idt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
 	void (*set_idt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
 	void (*get_gdt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index b617579095277..4c92432e33e27 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -263,7 +263,7 @@ static int get_max_npt_level(void)
 #endif
 }
 
-void svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
+int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	vcpu->arch.efer = efer;
@@ -283,6 +283,7 @@ void svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
 
 	svm->vmcb->save.efer = efer | EFER_SVME;
 	vmcb_mark_dirty(svm->vmcb, VMCB_CR);
+	return 0;
 }
 
 static int is_external_interrupt(u32 info)
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index ab913468f9cbe..468c58a915347 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -349,7 +349,7 @@ static inline bool gif_set(struct vcpu_svm *svm)
 #define MSR_INVALID 0xffffffffU
 
 u32 svm_msrpm_offset(u32 msr);
-void svm_set_efer(struct kvm_vcpu *vcpu, u64 efer);
+int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer);
 void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
 int svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
 void svm_flush_tlb(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 46ba2e03a8926..c2b60d242c2bd 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2862,13 +2862,15 @@ static void enter_rmode(struct kvm_vcpu *vcpu)
 	kvm_mmu_reset_context(vcpu);
 }
 
-void vmx_set_efer(struct kvm_vcpu *vcpu, u64 efer)
+int vmx_set_efer(struct kvm_vcpu *vcpu, u64 efer)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	struct shared_msr_entry *msr = find_msr_entry(vmx, MSR_EFER);
 
-	if (!msr)
-		return;
+	if (!msr) {
+		/* Host doesn't support EFER, nothing to do */
+		return 0;
+	}
 
 	vcpu->arch.efer = efer;
 	if (efer & EFER_LMA) {
@@ -2880,6 +2882,7 @@ void vmx_set_efer(struct kvm_vcpu *vcpu, u64 efer)
 		msr->data = efer & ~EFER_LME;
 	}
 	setup_msrs(vmx);
+	return 0;
 }
 
 #ifdef CONFIG_X86_64
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 539ea1cd6020c..3fed4e4367b24 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1471,7 +1471,8 @@ static int set_efer(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	efer &= ~EFER_LMA;
 	efer |= vcpu->arch.efer & EFER_LMA;
 
-	kvm_x86_ops.set_efer(vcpu, efer);
+	if (kvm_x86_ops.set_efer(vcpu, efer) && !msr_info->host_initiated)
+		return 1;
 
 	/* Update reserved bits */
 	if ((efer ^ old_efer) & EFER_NX)
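
The control flow being introduced: the backend hook may now veto an EFER
write, and the generic MSR path turns that veto into a failure (which the
caller injects as #GP) for guest-initiated writes, while still accepting
host-initiated writes (e.g. a qemu-driven state restore) for backward
compatibility. A condensed userspace sketch of that caller-side logic
(hypothetical names; the real code is set_efer() in x86.c above):

	#include <stdbool.h>
	#include <stdint.h>

	#define EFER_SVME (1ULL << 12)

	static bool nested_alloc_ok;	/* pretend nested state allocation state */

	/* Backend hook: returns 0 on success, nonzero if the write must fail,
	 * e.g. EFER.SVME set but the nested state cannot be allocated. */
	static int backend_set_efer(uint64_t efer)
	{
		if ((efer & EFER_SVME) && !nested_alloc_ok)
			return 1;
		return 0;
	}

	/* Returns 1 to fail the write (guest sees #GP), 0 on success. */
	static int set_efer(uint64_t efer, bool host_initiated)
	{
		if (backend_set_efer(efer) && !host_initiated)
			return 1;	/* guest write fails */
		return 0;		/* host writes accepted regardless */
	}

	int main(void)
	{
		return set_efer(EFER_SVME, false);	/* 1: vetoed guest write */
	}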
From patchwork Thu Aug 27 17:04:33 2020
X-Patchwork-Id: 11741099
From: Maxim Levitsky <mlevitsk@redhat.com>
To: kvm@vger.kernel.org
Cc: Joerg Roedel, Borislav Petkov, Paolo Bonzini, "H. Peter Anvin",
    linux-kernel@vger.kernel.org, Ingo Molnar,
    x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)),
    Vitaly Kuznetsov, Jim Mattson, Wanpeng Li, Thomas Gleixner,
    Sean Christopherson, Maxim Levitsky
Subject: [PATCH 7/8] KVM: emulator: more strict rsm checks
Date: Thu, 27 Aug 2020 20:04:33 +0300
Message-Id: <20200827170434.284680-8-mlevitsk@redhat.com>
In-Reply-To: <20200827170434.284680-1-mlevitsk@redhat.com>
References: <20200827170434.284680-1-mlevitsk@redhat.com>

Don't ignore return values in rsm_load_state_64/32 to avoid loading
invalid state from the SMM state area if it was tampered with by the
guest.

This is primarily intended to avoid letting the guest set bits in EFER
(like EFER.SVME when nesting is disabled) by manipulating the SMM save
area.

The DR checks are probably redundant, since the code sets them to fixed
values, but it won't hurt to have them.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/emulate.c | 22 +++++++++++++++++-----
 1 file changed, 17 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index d0e2825ae6174..1d450d7710d63 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -2505,9 +2505,14 @@ static int rsm_load_state_32(struct x86_emulate_ctxt *ctxt,
 		*reg_write(ctxt, i) = GET_SMSTATE(u32, smstate, 0x7fd0 + i * 4);
 
 	val = GET_SMSTATE(u32, smstate, 0x7fcc);
-	ctxt->ops->set_dr(ctxt, 6, (val & DR6_VOLATILE) | DR6_FIXED_1);
+
+	if (ctxt->ops->set_dr(ctxt, 6, (val & DR6_VOLATILE) | DR6_FIXED_1))
+		return X86EMUL_UNHANDLEABLE;
+
 	val = GET_SMSTATE(u32, smstate, 0x7fc8);
-	ctxt->ops->set_dr(ctxt, 7, (val & DR7_VOLATILE) | DR7_FIXED_1);
+
+	if (ctxt->ops->set_dr(ctxt, 7, (val & DR7_VOLATILE) | DR7_FIXED_1))
+		return X86EMUL_UNHANDLEABLE;
 
 	selector = GET_SMSTATE(u32, smstate, 0x7fc4);
 	set_desc_base(&desc, GET_SMSTATE(u32, smstate, 0x7f64));
@@ -2560,16 +2565,23 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt,
 	ctxt->eflags = GET_SMSTATE(u32, smstate, 0x7f70) | X86_EFLAGS_FIXED;
 
 	val = GET_SMSTATE(u32, smstate, 0x7f68);
-	ctxt->ops->set_dr(ctxt, 6, (val & DR6_VOLATILE) | DR6_FIXED_1);
+
+	if (ctxt->ops->set_dr(ctxt, 6, (val & DR6_VOLATILE) | DR6_FIXED_1))
+		return X86EMUL_UNHANDLEABLE;
+
 	val = GET_SMSTATE(u32, smstate, 0x7f60);
-	ctxt->ops->set_dr(ctxt, 7, (val & DR7_VOLATILE) | DR7_FIXED_1);
+
+	if (ctxt->ops->set_dr(ctxt, 7, (val & DR7_VOLATILE) | DR7_FIXED_1))
+		return X86EMUL_UNHANDLEABLE;
 
 	cr0 = GET_SMSTATE(u64, smstate, 0x7f58);
 	cr3 = GET_SMSTATE(u64, smstate, 0x7f50);
 	cr4 = GET_SMSTATE(u64, smstate, 0x7f48);
 	ctxt->ops->set_smbase(ctxt, GET_SMSTATE(u32, smstate, 0x7f00));
 	val = GET_SMSTATE(u64, smstate, 0x7ed0);
-	ctxt->ops->set_msr(ctxt, MSR_EFER, val & ~EFER_LMA);
+
+	if (ctxt->ops->set_msr(ctxt, MSR_EFER, val & ~EFER_LMA))
+		return X86EMUL_UNHANDLEABLE;
 
 	selector = GET_SMSTATE(u32, smstate, 0x7e90);
 	rsm_set_desc_flags(&desc, GET_SMSTATE(u32, smstate, 0x7e92) << 8);
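
The principle behind this patch: state loaded from the SMM save area is
guest-controlled, so every setter's result must be checked, and a failure
must abort the emulated RSM (X86EMUL_UNHANDLEABLE) rather than continue
with partially loaded state. A schematic sketch of that validate-or-abort
shape (hypothetical helpers and checks, not the emulator code):

	#include <stdint.h>

	enum { EMUL_CONTINUE, EMUL_UNHANDLEABLE };

	static int set_dr(int dr, uint64_t val)
	{
		/* illustrative check: DR6/DR7 upper 32 bits must be zero */
		(void)dr;
		return (val >> 32) != 0;
	}

	static int set_msr(uint32_t msr, uint64_t val)
	{
		/* illustrative: reject EFER.SVME when nesting is disabled */
		(void)msr;
		return (val & (1ULL << 12)) != 0;
	}

	static int rsm_load_state(const uint8_t *smstate)
	{
		/* Each value originates in guest-writable SMRAM:
		 * check every setter and bail out on the first failure. */
		if (set_dr(6, 0 /* GET_SMSTATE(...) in the real code */))
			return EMUL_UNHANDLEABLE;
		if (set_dr(7, 0))
			return EMUL_UNHANDLEABLE;
		if (set_msr(0xC0000080 /* MSR_EFER */, 0))
			return EMUL_UNHANDLEABLE;
		(void)smstate;
		return EMUL_CONTINUE;
	}

	int main(void)
	{
		uint8_t smram[512] = {0};
		return rsm_load_state(smram);
	}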
From patchwork Thu Aug 27 17:04:34 2020
X-Patchwork-Id: 11741101
From: Maxim Levitsky <mlevitsk@redhat.com>
To: kvm@vger.kernel.org
Cc: Joerg Roedel, Borislav Petkov, Paolo Bonzini, "H. Peter Anvin",
    linux-kernel@vger.kernel.org, Ingo Molnar,
    x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)),
    Vitaly Kuznetsov, Jim Mattson, Wanpeng Li, Thomas Gleixner,
    Sean Christopherson, Maxim Levitsky
Subject: [PATCH 8/8] KVM: nSVM: implement on-demand allocation of the nested state
Date: Thu, 27 Aug 2020 20:04:34 +0300
Message-Id: <20200827170434.284680-9-mlevitsk@redhat.com>
In-Reply-To: <20200827170434.284680-1-mlevitsk@redhat.com>
References: <20200827170434.284680-1-mlevitsk@redhat.com>

This way we don't waste memory on VMs which don't use nested
virtualization even if it is available to them.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/nested.c | 42 +++++++++++++++++++++++++++++++
 arch/x86/kvm/svm/svm.c    | 52 +++++++++++++++++++++------------------
 arch/x86/kvm/svm/svm.h    |  6 +++++
 3 files changed, 76 insertions(+), 24 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 7b1c98826c365..e2f8e94e6b83c 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -471,6 +471,9 @@ int nested_svm_vmrun(struct vcpu_svm *svm)
 
 	vmcb12 = map.hva;
 
+	if (WARN_ON(!svm->nested.initialized))
+		return 1;
+
 	if (!nested_vmcb_checks(svm, vmcb12)) {
 		vmcb12->control.exit_code = SVM_EXIT_ERR;
 		vmcb12->control.exit_code_hi = 0;
@@ -686,6 +689,45 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 	return 0;
 }
 
+int svm_allocate_nested(struct vcpu_svm *svm)
+{
+	struct page *hsave_page;
+
+	if (svm->nested.initialized)
+		return 0;
+
+	hsave_page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
+	if (!hsave_page)
+		goto error;
+
+	svm->nested.hsave = page_address(hsave_page);
+
+	svm->nested.msrpm = svm_vcpu_init_msrpm();
+	if (!svm->nested.msrpm)
+		goto err_free_hsave;
+
+	svm->nested.initialized = true;
+	return 0;
+
+err_free_hsave:
+	__free_page(hsave_page);
+error:
+	return 1;
+}
+
+void svm_free_nested(struct vcpu_svm *svm)
+{
+	if (!svm->nested.initialized)
+		return;
+
+	svm_vcpu_free_msrpm(svm->nested.msrpm);
+	svm->nested.msrpm = NULL;
+
+	__free_page(virt_to_page(svm->nested.hsave));
+	svm->nested.hsave = NULL;
+
+	svm->nested.initialized = false;
+}
+
 /*
  * Forcibly leave nested mode in order to be able to reset the VCPU later on.
  */
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 4c92432e33e27..7ab142ed9c5c0 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -266,6 +266,7 @@ static int get_max_npt_level(void)
 int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
+	u64 old_efer = vcpu->arch.efer;
 	vcpu->arch.efer = efer;
 
 	if (!npt_enabled) {
@@ -276,14 +277,31 @@ int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
 			efer &= ~EFER_LME;
 	}
 
-	if (!(efer & EFER_SVME)) {
-		svm_leave_nested(svm);
-		svm_set_gif(svm, true);
+	if ((old_efer & EFER_SVME) != (efer & EFER_SVME)) {
+		if (!(efer & EFER_SVME)) {
+			svm_leave_nested(svm);
+			svm_set_gif(svm, true);
+
+			/*
+			 * Free the nested state unless we are in SMM, in which
+			 * case the exit from SVM mode is only for the duration
+			 * of the SMI handler.
+			 */
+			if (!is_smm(&svm->vcpu))
+				svm_free_nested(svm);
+
+		} else {
+			if (svm_allocate_nested(svm))
+				goto error;
+		}
 	}
 
 	svm->vmcb->save.efer = efer | EFER_SVME;
 	vmcb_mark_dirty(svm->vmcb, VMCB_CR);
 	return 0;
+
+error:
+	vcpu->arch.efer = old_efer;
+	return 1;
 }
 
 static int is_external_interrupt(u32 info)
@@ -610,7 +628,7 @@ static void set_msr_interception(u32 *msrpm, unsigned msr,
 	msrpm[offset] = tmp;
 }
 
-static u32 *svm_vcpu_init_msrpm(void)
+u32 *svm_vcpu_init_msrpm(void)
 {
 	int i;
 	u32 *msrpm;
@@ -630,7 +648,7 @@ static u32 *svm_vcpu_init_msrpm(void)
 	return msrpm;
 }
 
-static void svm_vcpu_free_msrpm(u32 *msrpm)
+void svm_vcpu_free_msrpm(u32 *msrpm)
 {
 	__free_pages(virt_to_page(msrpm), MSRPM_ALLOC_ORDER);
 }
@@ -1184,7 +1202,6 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm;
 	struct page *vmcb_page;
-	struct page *hsave_page;
 	int err;
 
 	BUILD_BUG_ON(offsetof(struct vcpu_svm, vcpu) != 0);
@@ -1195,13 +1212,9 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 	if (!vmcb_page)
 		goto out;
 
-	hsave_page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
-	if (!hsave_page)
-		goto error_free_vmcb_page;
-
 	err = avic_init_vcpu(svm);
 	if (err)
-		goto error_free_hsave_page;
+		goto out;
 
 	/* We initialize this flag to true to make sure that the is_running
 	 * bit would be set the first time the vcpu is loaded.
@@ -1209,15 +1222,9 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 	if (irqchip_in_kernel(vcpu->kvm) && kvm_apicv_activated(vcpu->kvm))
 		svm->avic_is_running = true;
 
-	svm->nested.hsave = page_address(hsave_page);
-
 	svm->msrpm = svm_vcpu_init_msrpm();
 	if (!svm->msrpm)
-		goto error_free_hsave_page;
-
-	svm->nested.msrpm = svm_vcpu_init_msrpm();
-	if (!svm->nested.msrpm)
-		goto error_free_msrpm;
+		goto error_free_vmcb_page;
 
 	svm->vmcb = page_address(vmcb_page);
 	svm->vmcb_pa = __sme_set(page_to_pfn(vmcb_page) << PAGE_SHIFT);
@@ -1229,10 +1236,6 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 
 	return 0;
 
-error_free_msrpm:
-	svm_vcpu_free_msrpm(svm->msrpm);
-error_free_hsave_page:
-	__free_page(hsave_page);
 error_free_vmcb_page:
 	__free_page(vmcb_page);
 out:
@@ -1258,10 +1261,10 @@ static void svm_free_vcpu(struct kvm_vcpu *vcpu)
 	 */
 	svm_clear_current_vmcb(svm->vmcb);
 
+	svm_free_nested(svm);
+
 	__free_page(pfn_to_page(__sme_clr(svm->vmcb_pa) >> PAGE_SHIFT));
 	__free_pages(virt_to_page(svm->msrpm), MSRPM_ALLOC_ORDER);
-	__free_page(virt_to_page(svm->nested.hsave));
-	__free_pages(virt_to_page(svm->nested.msrpm), MSRPM_ALLOC_ORDER);
 }
 
 static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
@@ -3919,6 +3922,7 @@ static int svm_pre_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
 					 gpa_to_gfn(vmcb12_gpa), &map) == -EINVAL)
 				return 1;
 
+			svm_allocate_nested(svm);
 			ret = enter_svm_guest_mode(svm, vmcb12_gpa, map.hva);
 
 			kvm_vcpu_unmap(&svm->vcpu, &map, true);
 		}
 	}
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 468c58a915347..cb9c4a8f26f50 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -97,6 +97,8 @@ struct svm_nested_state {
 
 	/* cache for control fields of the guest */
 	struct vmcb_control_area ctl;
+
+	bool initialized;
 };
 
 struct vcpu_svm {
@@ -349,6 +351,8 @@ static inline bool gif_set(struct vcpu_svm *svm)
 #define MSR_INVALID 0xffffffffU
 
 u32 svm_msrpm_offset(u32 msr);
+u32 *svm_vcpu_init_msrpm(void);
+void svm_vcpu_free_msrpm(u32 *msrpm);
 int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer);
 void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
 int svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
@@ -390,6 +394,8 @@ static inline bool nested_exit_on_nmi(struct vcpu_svm *svm)
 int enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
 			 struct vmcb *nested_vmcb);
 void svm_leave_nested(struct vcpu_svm *svm);
+void svm_free_nested(struct vcpu_svm *svm);
+int svm_allocate_nested(struct vcpu_svm *svm);
 int nested_svm_vmrun(struct vcpu_svm *svm);
 void nested_svm_vmloadsave(struct vmcb *from_vmcb, struct vmcb *to_vmcb);
 int nested_svm_vmexit(struct vcpu_svm *svm);
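
The allocation policy this patch implements, reduced to its skeleton: the
nested state is created lazily the first time the guest sets EFER.SVME,
freed again when SVME is cleared outside SMM, and guarded by an
'initialized' flag so both operations are idempotent. A compact userspace
model of that life cycle (hypothetical names and sizes, not the KVM code):

	#include <stdbool.h>
	#include <stdlib.h>

	struct nested { void *hsave; void *msrpm; bool initialized; };

	static int nested_alloc(struct nested *n)
	{
		if (n->initialized)
			return 0;		/* idempotent */
		n->hsave = calloc(1, 4096);
		if (!n->hsave)
			return 1;
		n->msrpm = calloc(1, 8192);
		if (!n->msrpm) {
			free(n->hsave);
			return 1;
		}
		n->initialized = true;
		return 0;
	}

	static void nested_free(struct nested *n)
	{
		if (!n->initialized)
			return;
		free(n->msrpm);
		free(n->hsave);
		n->initialized = false;
	}

	int main(void)
	{
		struct nested n = {0};

		if (nested_alloc(&n))	/* guest sets EFER.SVME */
			return 1;
		nested_free(&n);	/* guest clears EFER.SVME outside SMM */
		return 0;
	}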