From patchwork Fri Jun 22 23:35:21 2018
X-Patchwork-Submitter: Liran Alon
X-Patchwork-Id: 10483317
From: Liran Alon
To: pbonzini@redhat.com, rkrcmar@redhat.com, kvm@vger.kernel.org
Cc: jmattson@google.com, idan.brown@oracle.com, Liran Alon
Subject: [PATCH 21/22] KVM: nVMX: Support VMCS shadowing virtualization
Date: Sat, 23 Jun 2018 02:35:21 +0300
Message-Id: <1529710522-28315-22-git-send-email-liran.alon@oracle.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1529710522-28315-1-git-send-email-liran.alon@oracle.com>
References: <1529710522-28315-1-git-send-email-liran.alon@oracle.com>
X-Mailing-List: kvm@vger.kernel.org

L0 now sets up controls for vmcs02 so that L2 can perform unintercepted
VMREADs and VMWRITEs, as specified in the vmcs12 controls (though VMCS
fields not supported by L0 will still cause VM-exits).

A boolean was added to vmx->nested to indicate whether vmcs02 uses VMCS
shadowing. Caching this avoids having to deduce it by executing VMREADs
on every use, as a performance optimization.

Signed-off-by: Liran Alon
Signed-off-by: Jim Mattson
---
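[Editorial note, not part of the patch: the standalone sketch below
illustrates the bitmap-merge idea behind nested_vmx_setup_shadow_bitmaps().
The real vmread/vmwrite bitmaps are 4 KiB pages indexed by VMCS field
number; the toy 64-bit bitmap and the helper l0_supports_field() are
hypothetical stand-ins. A set bit intercepts the access (VM-exit to L0);
a clear bit lets hardware satisfy it from the shadow VMCS.]

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in: which fields L0 can shadow in hardware. */
static int l0_supports_field(unsigned int field)
{
	return (field % 2) == 0;	/* arbitrary choice for the sketch */
}

/*
 * Derive the vmcs02 VMREAD bitmap from the vmcs12 VMREAD bitmap:
 * a field may bypass interception only if L1 does not intercept it
 * AND L0 supports shadowing it. Everything else must VM-exit.
 */
static uint64_t setup_vmcs02_vmread_bitmap(uint64_t vmcs12_bitmap)
{
	uint64_t vmcs02_bitmap = 0;
	unsigned int field;

	for (field = 0; field < 64; field++) {
		int l1_intercepts = (vmcs12_bitmap >> field) & 1;

		if (l1_intercepts || !l0_supports_field(field))
			vmcs02_bitmap |= 1ULL << field;
	}
	return vmcs02_bitmap;
}

int main(void)
{
	uint64_t vmcs12_bitmap = 0xf0;	/* L1 intercepts fields 4-7 */

	printf("vmcs02 vmread bitmap: 0x%016llx\n",
	       (unsigned long long)setup_vmcs02_vmread_bitmap(vmcs12_bitmap));
	return 0;
}

This also shows why a field unsupported by L0 always exits: its bit is
never cleared in the vmcs02 bitmap, regardless of what L1 requested.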
 arch/x86/kvm/vmx.c | 38 +++++++++++++++++++++++++++++++-------
 1 file changed, 31 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 94922adf6f47..4b63d6bae6bd 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -675,6 +675,7 @@ struct nested_vmx {
 	bool pi_pending;
 	u16 posted_intr_nv;
 
+	bool virtualize_shadow_vmcs;
 	unsigned long *vmread_bitmap;
 	unsigned long *vmwrite_bitmap;
 
@@ -10918,6 +10919,8 @@ static void vmx_inject_page_fault_nested(struct kvm_vcpu *vcpu,
 
 static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 						 struct vmcs12 *vmcs12);
+static inline bool nested_vmx_setup_shadow_bitmaps(struct kvm_vcpu *vcpu,
+						   struct vmcs12 *vmcs12);
 
 static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu,
 				    struct vmcs12 *vmcs12)
@@ -11007,6 +11010,15 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu,
 		else
 			vmcs_clear_bits(CPU_BASED_VM_EXEC_CONTROL,
 					CPU_BASED_USE_MSR_BITMAPS);
+
+	if (vmx->nested.virtualize_shadow_vmcs) {
+		if (nested_vmx_setup_shadow_bitmaps(vcpu, vmcs12)) {
+			copy_shadow_vmcs12_to_shadow_vmcs02(vmx);
+		} else {
+			vmx->nested.virtualize_shadow_vmcs = false;
+			vmx_disable_shadow_vmcs(vmx);
+		}
+	}
 }
 
 static void vmx_start_preemption_timer(struct kvm_vcpu *vcpu)
@@ -11287,6 +11299,9 @@ static void nested_flush_cached_shadow_vmcs12(struct kvm_vcpu *vcpu,
 	    vmcs12->vmcs_link_pointer == -1ull)
 		return;
 
+	if (vmx->nested.virtualize_shadow_vmcs)
+		copy_shadow_vmcs02_to_shadow_vmcs12(to_vmx(vcpu));
+
 	kvm_write_guest(vmx->vcpu.kvm, vmcs12->vmcs_link_pointer,
 			get_shadow_vmcs12(vcpu), VMCS12_SIZE);
 }
@@ -11849,7 +11864,10 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 		 * VMCS shadowing, virtualize VMCS shadowing by
 		 * allocating a shadow VMCS and vmread/vmwrite bitmaps
 		 * for vmcs02. vmread/vmwrite bitmaps are init at this
-		 * point to intercept all vmread/vmwrite.
+		 * point to intercept all vmread/vmwrite. Later,
+		 * nested_get_vmcs12_pages() will either update bitmaps to
+		 * handle some vmread/vmwrite by hardware or remove
+		 * VMCS shadowing from vmcs02.
 		 *
 		 * Otherwise, emulate VMCS shadowing by disabling VMCS
 		 * shadowing at vmcs02 and emulate vmread/vmwrite to
@@ -11860,10 +11878,12 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 		 * back to VMCS shadowing emulation if
 		 * nested_vmcs_fields_per_group() > BITS_PER_LONG.
 		 */
-		if ((exec_control & SECONDARY_EXEC_SHADOW_VMCS) &&
-		    enable_shadow_vmcs &&
-		    nested_vmcs_fields_per_group(vmx) <= BITS_PER_LONG &&
-		    !alloc_vmcs_shadowing_pages(vcpu)) {
+		vmx->nested.virtualize_shadow_vmcs =
+			(exec_control & SECONDARY_EXEC_SHADOW_VMCS) &&
+			enable_shadow_vmcs &&
+			nested_vmcs_fields_per_group(vmx) <= BITS_PER_LONG &&
+			!alloc_vmcs_shadowing_pages(vcpu);
+		if (vmx->nested.virtualize_shadow_vmcs) {
 			vmcs_write64(VMCS_LINK_POINTER,
 				     vmcs12->vmcs_link_pointer == -1ull ?
 				     -1ull : __pa(vmx->loaded_vmcs->shadow_vmcs));
@@ -12236,8 +12256,6 @@ static int enter_vmx_non_root_mode(struct kvm_vcpu *vcpu)
 	if (prepare_vmcs02(vcpu, vmcs12, &exit_qual))
 		goto fail;
 
-	nested_get_vmcs12_pages(vcpu, vmcs12);
-
 	r = EXIT_REASON_MSR_LOAD_FAIL;
 	msr_entry_idx = nested_vmx_load_msr(vcpu,
 					    vmcs12->vm_entry_msr_load_addr,
@@ -12364,6 +12382,12 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
 	 * exist on destination host yet).
 	 */
 	nested_cache_shadow_vmcs12(vcpu, vmcs12);
+	/*
+	 * Must be called after nested_cache_shadow_vmcs12()
+	 * because it may internally copy cached shadow vmcs12
+	 * into shadow vmcs02.
+	 */
+	nested_get_vmcs12_pages(vcpu, vmcs12);
 
 	/*
 	 * If we're entering a halted L2 vcpu and the L2 vcpu won't be woken