From patchwork Mon Dec 18 17:17:41 2017
X-Patchwork-Submitter: Vitaly Kuznetsov
X-Patchwork-Id: 10120985
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org
Cc: x86@kernel.org, Paolo Bonzini, Radim Krčmář, "K. Y. Srinivasan",
	Haiyang Zhang, Stephen Hemminger, "Michael Kelley (EOSG)",
	Mohammed Gamal, Cathy Avery, Bandan Das, Roman Kagan,
	linux-kernel@vger.kernel.org, devel@linuxdriverproject.org
Subject: [PATCH RFC 6/7] KVM: nVMX: add enlightened VMCS state
Date: Mon, 18 Dec 2017 18:17:41 +0100
Message-Id: <20171218171742.5765-7-vkuznets@redhat.com>
In-Reply-To: <20171218171742.5765-1-vkuznets@redhat.com>
References: <20171218171742.5765-1-vkuznets@redhat.com>
List-ID: kvm@vger.kernel.org

From: Ladi Prosek

Add two bool fields and implement copy_enlightened_to_vmcs12() and
copy_vmcs12_to_enlightened(). Unlike shadow VMCS, enlightened VMCS is
para-virtual and is active only if the nested guest explicitly enables
it. This is reflected in a pattern that repeats several times
throughout the patch:

	if (vmx->nested.enlightened_vmcs_active) {
		/* enlightened! */
	} else if (enable_shadow_vmcs) {
		/* fall-back */
	}

If the nested guest elects not to use the enlightened VMCS, the regular
HW-assisted shadow VMCS feature is used instead, if enabled.
enlightened_vmcs_active can never be true unless
enlightened_vmcs_enabled is set.
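For illustration, here is a minimal standalone sketch of the dispatch
logic that the pattern above implements at each sync point. The stub
struct nested_state and the mark_vmcs12_dirty() helper are hypothetical
and exist only to make the sketch self-contained and compilable; the
actual patch applies this logic inside nested_release_vmcs12(),
set_current_vmptr(), vmx_vcpu_run(), nested_vmx_run(),
nested_vmx_vmexit() and nested_vmx_entry_failure():

	#include <stdbool.h>
	#include <stdio.h>

	/* Hypothetical stand-ins for the enable_shadow_vmcs module
	 * parameter and the nested-state fields the patch adds. */
	static bool enable_shadow_vmcs = true;

	struct nested_state {
		bool enlightened_vmcs_active;	/* last vmentry used an eVMCS */
		bool sync_enlightened_vmcs;	/* eVMCS must be synced with vmcs12 */
		bool sync_shadow_vmcs;		/* shadow VMCS must be synced */
	};

	/* Mark the appropriate VMCS flavor for syncing. The enlightened
	 * path always takes precedence over the shadow path, since the
	 * two mechanisms are mutually exclusive for a given L2 run. */
	static void mark_vmcs12_dirty(struct nested_state *nested)
	{
		if (nested->enlightened_vmcs_active)
			nested->sync_enlightened_vmcs = true;
		else if (enable_shadow_vmcs)
			nested->sync_shadow_vmcs = true;
	}

	int main(void)
	{
		struct nested_state nested = { .enlightened_vmcs_active = true };

		mark_vmcs12_dirty(&nested);
		printf("sync eVMCS: %d, sync shadow: %d\n",
		       nested.sync_enlightened_vmcs, nested.sync_shadow_vmcs);
		return 0;
	}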
Signed-off-by: Ladi Prosek
Signed-off-by: Vitaly Kuznetsov
---
 arch/x86/kvm/vmx.c | 60 ++++++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 52 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 320bb6670413..00b4a362351d 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -503,6 +503,16 @@ struct nested_vmx {
 	 * on what the enlightened VMCS supports.
 	 */
 	bool enlightened_vmcs_enabled;
+	/*
+	 * Indicates that the nested hypervisor performed the last vmentry with
+	 * a Hyper-V enlightened VMCS.
+	 */
+	bool enlightened_vmcs_active;
+
+	/*
+	 * Indicates that the enlightened VMCS must be synced with vmcs12
+	 */
+	bool sync_enlightened_vmcs;
 
 	/* vmcs02_list cache of VMCSs recently used to run L2 guests */
 	struct list_head vmcs02_pool;
@@ -991,6 +1001,7 @@ static void vmx_get_segment(struct kvm_vcpu *vcpu,
 static bool guest_state_valid(struct kvm_vcpu *vcpu);
 static u32 vmx_segment_access_rights(struct kvm_segment *var);
 static void copy_shadow_to_vmcs12(struct vcpu_vmx *vmx);
+static void copy_enlightened_to_vmcs12(struct vcpu_vmx *vmx);
 static bool vmx_get_nmi_mask(struct kvm_vcpu *vcpu);
 static void vmx_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked);
 static bool nested_vmx_is_page_fault_vmexit(struct vmcs12 *vmcs12,
@@ -7455,7 +7466,10 @@ static inline void nested_release_vmcs12(struct vcpu_vmx *vmx)
 	if (vmx->nested.current_vmptr == -1ull)
 		return;
 
-	if (enable_shadow_vmcs) {
+	if (vmx->nested.enlightened_vmcs_active) {
+		copy_enlightened_to_vmcs12(vmx);
+		vmx->nested.sync_enlightened_vmcs = false;
+	} else if (enable_shadow_vmcs) {
 		/* copy to memory all shadowed fields in case
 		   they were modified */
 		copy_shadow_to_vmcs12(vmx);
@@ -7642,6 +7656,20 @@ static inline int vmcs12_write_any(struct kvm_vcpu *vcpu,
 
 }
 
+static void copy_enlightened_to_vmcs12(struct vcpu_vmx *vmx)
+{
+	kvm_vcpu_read_guest_page(&vmx->vcpu,
+				 vmx->nested.current_vmptr >> PAGE_SHIFT,
+				 vmx->nested.cached_vmcs12, 0, VMCS12_SIZE);
+}
+
+static void copy_vmcs12_to_enlightened(struct vcpu_vmx *vmx)
+{
+	kvm_vcpu_write_guest_page(&vmx->vcpu,
+				  vmx->nested.current_vmptr >> PAGE_SHIFT,
+				  vmx->nested.cached_vmcs12, 0, VMCS12_SIZE);
+}
+
 static void copy_shadow_to_vmcs12(struct vcpu_vmx *vmx)
 {
 	int i;
@@ -7841,7 +7869,9 @@ static int handle_vmwrite(struct kvm_vcpu *vcpu)
 static void set_current_vmptr(struct vcpu_vmx *vmx, gpa_t vmptr)
 {
 	vmx->nested.current_vmptr = vmptr;
-	if (enable_shadow_vmcs) {
+	if (vmx->nested.enlightened_vmcs_active) {
+		vmx->nested.sync_enlightened_vmcs = true;
+	} else if (enable_shadow_vmcs) {
 		vmcs_set_bits(SECONDARY_VM_EXEC_CONTROL,
 			      SECONDARY_EXEC_SHADOW_VMCS);
 		vmcs_write64(VMCS_LINK_POINTER,
@@ -9396,7 +9426,10 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		vmcs_write32(PLE_WINDOW, vmx->ple_window);
 	}
 
-	if (vmx->nested.sync_shadow_vmcs) {
+	if (vmx->nested.sync_enlightened_vmcs) {
+		copy_vmcs12_to_enlightened(vmx);
+		vmx->nested.sync_enlightened_vmcs = false;
+	} else if (vmx->nested.sync_shadow_vmcs) {
 		copy_vmcs12_to_shadow(vmx);
 		vmx->nested.sync_shadow_vmcs = false;
 	}
@@ -11017,7 +11050,9 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
 
 	vmcs12 = get_vmcs12(vcpu);
 
-	if (enable_shadow_vmcs)
+	if (vmx->nested.enlightened_vmcs_active)
+		copy_enlightened_to_vmcs12(vmx);
+	else if (enable_shadow_vmcs)
 		copy_shadow_to_vmcs12(vmx);
 
 	/*
@@ -11634,8 +11669,12 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
 	 */
 	kvm_make_request(KVM_REQ_APIC_PAGE_RELOAD, vcpu);
 
-	if (enable_shadow_vmcs && exit_reason != -1)
-		vmx->nested.sync_shadow_vmcs = true;
+	if (exit_reason != -1) {
+		if (vmx->nested.enlightened_vmcs_active)
+			vmx->nested.sync_enlightened_vmcs = true;
+		else if (enable_shadow_vmcs)
+			vmx->nested.sync_shadow_vmcs = true;
+	}
 
 	/* in case we halted in L2 */
 	vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
@@ -11714,12 +11753,17 @@ static void nested_vmx_entry_failure(struct kvm_vcpu *vcpu,
 			struct vmcs12 *vmcs12,
 			u32 reason, unsigned long qualification)
 {
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+
 	load_vmcs12_host_state(vcpu, vmcs12);
 	vmcs12->vm_exit_reason = reason | VMX_EXIT_REASONS_FAILED_VMENTRY;
 	vmcs12->exit_qualification = qualification;
 	nested_vmx_succeed(vcpu);
-	if (enable_shadow_vmcs)
-		to_vmx(vcpu)->nested.sync_shadow_vmcs = true;
+
+	if (vmx->nested.enlightened_vmcs_active)
+		vmx->nested.sync_enlightened_vmcs = true;
+	else if (enable_shadow_vmcs)
+		vmx->nested.sync_shadow_vmcs = true;
 }
 
 static int vmx_check_intercept(struct kvm_vcpu *vcpu,