From patchwork Fri Jun 22 23:35:15 2018
From: Liran Alon <liran.alon@oracle.com>
To: pbonzini@redhat.com, rkrcmar@redhat.com, kvm@vger.kernel.org
Cc: jmattson@google.com, idan.brown@oracle.com, Liran Alon <liran.alon@oracle.com>
Subject: [PATCH 15/22] KVM: nVMX: Allocate bitmaps for virtualizing VMCS shadowing
Date: Sat, 23 Jun 2018 02:35:15 +0300
Message-Id: <1529710522-28315-16-git-send-email-liran.alon@oracle.com>
In-Reply-To: <1529710522-28315-1-git-send-email-liran.alon@oracle.com>
References: <1529710522-28315-1-git-send-email-liran.alon@oracle.com>
List-ID: kvm@vger.kernel.org
We can't really use the L1 guest's vmread and vmwrite bitmaps directly,
because virtual and physical hardware may not support the same VMCS
fields.

This change allocates pages for the "cleaned" versions of the L1
guest's vmread and vmwrite bitmaps. These bitmaps do not yet reflect
the desires of the L1 hypervisor. All bits are set, so we can enable
VMCS shadowing in hardware, and every VMREAD or VMWRITE will result in
a VM-exit to L0, which emulates VMCS shadowing as it did before this
change.

Signed-off-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Jim Mattson <jmattson@google.com>
---
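A note on the mechanics (illustrative only, not part of the patch):
with the "VMCS shadowing" execution control set, hardware consults
these 4-KByte bitmaps on every VMREAD and VMWRITE, using bits 14:0 of
the field encoding as a bit index; a set bit forces a VM-exit. A
minimal sketch of that lookup follows; the wrapper name is invented,
while test_bit() is the generic kernel bitop:

	/* True if a VMREAD/VMWRITE of @field should exit to the hypervisor. */
	static bool vmcs_field_intercepted(const unsigned long *bitmap, u32 field)
	{
		/* Bits 14:0 of the field encoding select a bit in the page. */
		return test_bit(field & 0x7fff, bitmap);
	}

Because both bitmaps are memset to 0xff here, this predicate is always
true, so L0 keeps intercepting and emulating every VMREAD/VMWRITE just
as it did before this change.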
 arch/x86/kvm/vmx.c | 78 +++++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 69 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 99576c2fa65a..3327bd7fe81f 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -675,6 +675,9 @@ struct nested_vmx {
 	bool pi_pending;
 	u16 posted_intr_nv;
 
+	unsigned long *vmread_bitmap;
+	unsigned long *vmwrite_bitmap;
+
 	struct hrtimer preemption_timer;
 	bool preemption_timer_expired;
 
@@ -8088,6 +8091,14 @@ static void free_nested(struct vcpu_vmx *vmx)
 		vmcs_clear(vmx->vmcs01.shadow_vmcs);
 		free_vmcs(vmx->vmcs01.shadow_vmcs);
 		vmx->vmcs01.shadow_vmcs = NULL;
+		if (vmx->nested.vmread_bitmap) {
+			free_page((unsigned long)vmx->nested.vmread_bitmap);
+			vmx->nested.vmread_bitmap = NULL;
+		}
+		if (vmx->nested.vmwrite_bitmap) {
+			free_page((unsigned long)vmx->nested.vmwrite_bitmap);
+			vmx->nested.vmwrite_bitmap = NULL;
+		}
 	}
 	kfree(vmx->nested.cached_vmcs12);
 	kfree(vmx->nested.cached_shadow_vmcs12);
@@ -11313,6 +11324,40 @@ static int nested_vmx_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3, bool ne
 	return 0;
 }
 
+/*
+ * Allocate three pages for virtualizing VMCS shadowing: the shadow
+ * VMCS itself and the vmread and vmwrite bitmaps. These pages are
+ * allocated when first needed and freed when leaving virtual VMX
+ * operation. If previously allocated, the existing pages are
+ * reused. When first allocated, the VMREAD and VMWRITE bitmaps are
+ * initialized to all 1's.
+ */
+static int alloc_vmcs_shadowing_pages(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+
+	if (!alloc_shadow_vmcs(vcpu))
+		return -ENOMEM;
+
+	if (!vmx->nested.vmread_bitmap) {
+		vmx->nested.vmread_bitmap =
+			(unsigned long *)__get_free_page(GFP_KERNEL);
+		if (!vmx->nested.vmread_bitmap)
+			return -ENOMEM;
+		memset(vmx->nested.vmread_bitmap, 0xff, PAGE_SIZE);
+	}
+
+	if (!vmx->nested.vmwrite_bitmap) {
+		vmx->nested.vmwrite_bitmap =
+			(unsigned long *)__get_free_page(GFP_KERNEL);
+		if (!vmx->nested.vmwrite_bitmap)
+			return -ENOMEM;
+		memset(vmx->nested.vmwrite_bitmap, 0xff, PAGE_SIZE);
+	}
+
+	return 0;
+}
+
 static void prepare_vmcs02_full(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -11358,12 +11403,6 @@ static void prepare_vmcs02_full(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
 	if (nested_cpu_has_xsaves(vmcs12))
 		vmcs_write64(XSS_EXIT_BITMAP, vmcs12->xss_exit_bitmap);
 
-	if (nested_cpu_has2(vmcs12, SECONDARY_EXEC_SHADOW_VMCS) &&
-	    enable_shadow_vmcs && alloc_shadow_vmcs(vcpu)) {
-		/* TODO: IMPLEMENT */
-	}
-	vmcs_write64(VMCS_LINK_POINTER, -1ull);
-
 	if (cpu_has_vmx_posted_intr())
 		vmcs_write16(POSTED_INTR_NV, POSTED_INTR_NESTED_VECTOR);
 
@@ -11534,6 +11573,7 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 			SECONDARY_EXEC_XSAVES |
 			SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY |
 			SECONDARY_EXEC_APIC_REGISTER_VIRT |
+			SECONDARY_EXEC_SHADOW_VMCS |
 			SECONDARY_EXEC_ENABLE_VMFUNC);
 
 		if (nested_cpu_has(vmcs12, CPU_BASED_ACTIVATE_SECONDARY_CONTROLS)) {
@@ -11542,9 +11582,6 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 			exec_control |= vmcs12_exec_ctrl;
 		}
 
-		/* VMCS shadowing for L2 is emulated for now */
-		exec_control &= ~SECONDARY_EXEC_SHADOW_VMCS;
-
 		if (exec_control & SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY)
 			vmcs_write16(GUEST_INTR_STATUS, vmcs12->guest_intr_status);
 
@@ -11557,6 +11594,29 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 		if (exec_control & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)
 			vmcs_write64(APIC_ACCESS_ADDR, -1ull);
 
+		/*
+		 * If both L0 and vmcs12 enable VMCS shadowing,
+		 * virtualize VMCS shadowing by allocating a shadow
+		 * VMCS and vmread/vmwrite bitmaps for vmcs02. The
+		 * bitmaps are initialized here to intercept every
+		 * vmread/vmwrite.
+		 *
+		 * Otherwise, emulate VMCS shadowing by disabling VMCS
+		 * shadowing in vmcs02 and emulating vmread/vmwrite to
+		 * read/write from/to the shadow vmcs12.
+		 */
+		if ((exec_control & SECONDARY_EXEC_SHADOW_VMCS) &&
+		    enable_shadow_vmcs && !alloc_vmcs_shadowing_pages(vcpu)) {
+			vmcs_write64(VMCS_LINK_POINTER,
+				     vmcs12->vmcs_link_pointer == -1ull ?
+				     -1ull : __pa(vmx->loaded_vmcs->shadow_vmcs));
+			vmcs_write64(VMREAD_BITMAP, __pa(vmx->nested.vmread_bitmap));
+			vmcs_write64(VMWRITE_BITMAP, __pa(vmx->nested.vmwrite_bitmap));
+		} else {
+			exec_control &= ~SECONDARY_EXEC_SHADOW_VMCS;
+			vmcs_write64(VMCS_LINK_POINTER, -1ull);
+		}
+
 		vmcs_write32(SECONDARY_VM_EXEC_CONTROL, exec_control);
 	}
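Note (illustrative only, outside this patch): the "cleaning" of the L1
bitmaps that the commit message defers to a later change would, under
the stated design, copy L1's intercept bit for every field that L0 can
shadow in hardware and keep the bit set for everything else, so
unsupported fields still exit to L0 for emulation. A hypothetical
sketch, where l0_can_shadow_field() is an invented helper and the rest
are generic kernel bitops:

	static void clean_vmcs_bitmap(unsigned long *clean,
				      const unsigned long *l1_bitmap)
	{
		unsigned int bit;

		/* One bit per possible field-encoding index in the 4-KByte page. */
		for (bit = 0; bit < PAGE_SIZE * BITS_PER_BYTE; bit++) {
			if (l0_can_shadow_field(bit) && !test_bit(bit, l1_bitmap))
				clear_bit(bit, clean);	/* let hardware handle it */
			else
				set_bit(bit, clean);	/* keep intercepting */
		}
	}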