From patchwork Thu Jun 30 22:24:32 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jim Mattson
X-Patchwork-Id: 9209029
Date: Thu, 30 Jun 2016 15:24:32 -0700
From: Jim Mattson
To: kvm@vger.kernel.org
Subject: [PATCH] KVM: nVMX: Fix memory corruption when using VMCS shadowing
Message-ID: <20160630222432.GA15431@jmattson.sea.corp.google.com>
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Mailing-List: kvm@vger.kernel.org

In copy_shadow_to_vmcs12, a vCPU's shadow VMCS is temporarily loaded
on a logical processor as the current VMCS. When the copy is complete,
the logical processor's previous current VMCS must be restored.
However, the logical processor in question may not be the one on which
the vCPU is loaded (for instance, during kvm_vm_release, when
copy_shadow_to_vmcs12 is invoked on the same logical processor for
every vCPU in a VM). The new functions __vmptrst and __vmptrld are
introduced to save the logical processor's current VMCS before the
copy and to restore it afterwards.

Note that copy_vmcs12_to_shadow does not suffer from this problem,
since it is only called from a context where the vCPU is loaded on
the logical processor.

Signed-off-by: Jim Mattson
---
 arch/x86/include/asm/vmx.h |  1 +
 arch/x86/kvm/vmx.c         | 27 +++++++++++++++++++++++----
 2 files changed, 24 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 14c63c7..2d0548a 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -443,6 +443,7 @@ enum vmcs_field {
 #define ASM_VMX_VMLAUNCH          ".byte 0x0f, 0x01, 0xc2"
 #define ASM_VMX_VMRESUME          ".byte 0x0f, 0x01, 0xc3"
 #define ASM_VMX_VMPTRLD_RAX       ".byte 0x0f, 0xc7, 0x30"
+#define ASM_VMX_VMPTRST_RAX       ".byte 0x0f, 0xc7, 0x38"
 #define ASM_VMX_VMREAD_RDX_RAX    ".byte 0x0f, 0x78, 0xd0"
 #define ASM_VMX_VMWRITE_RAX_RDX   ".byte 0x0f, 0x79, 0xd0"
 #define ASM_VMX_VMWRITE_RSP_RDX   ".byte 0x0f, 0x79, 0xd4"
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 003618e..c79868a 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -1338,15 +1338,30 @@ static inline void loaded_vmcs_init(struct loaded_vmcs *loaded_vmcs)
 	loaded_vmcs->launched = 0;
 }
 
-static void vmcs_load(struct vmcs *vmcs)
+static inline u64 __vmptrst(void)
+{
+	u64 phys_addr;
+
+	asm volatile (__ex(ASM_VMX_VMPTRST_RAX)
+		      : : "a"(&phys_addr) : "cc", "memory");
+	return phys_addr;
+}
+
+static inline u8 __vmptrld(u64 phys_addr)
 {
-	u64 phys_addr = __pa(vmcs);
 	u8 error;
 
 	asm volatile (__ex(ASM_VMX_VMPTRLD_RAX) "; setna %0"
		      : "=qm"(error) : "a"(&phys_addr), "m"(phys_addr)
		      : "cc", "memory");
-	if (error)
+	return error;
+}
+
+static void vmcs_load(struct vmcs *vmcs)
+{
+	u64 phys_addr = __pa(vmcs);
+
+	if (__vmptrld(phys_addr))
 		printk(KERN_ERR "kvm: vmptrld %p/%llx failed\n",
 		       vmcs, phys_addr);
 }
@@ -7136,12 +7151,14 @@ static void copy_shadow_to_vmcs12(struct vcpu_vmx *vmx)
 	int i;
 	unsigned long field;
 	u64 field_value;
+	u64 current_vmcs_pa;
 	struct vmcs *shadow_vmcs = vmx->nested.current_shadow_vmcs;
 	const unsigned long *fields = shadow_read_write_fields;
 	const int num_fields = max_shadow_read_write_fields;
 
 	preempt_disable();
 
+	current_vmcs_pa = __vmptrst();
 	vmcs_load(shadow_vmcs);
 
 	for (i = 0; i < num_fields; i++) {
@@ -7167,7 +7184,9 @@ static void copy_shadow_to_vmcs12(struct vcpu_vmx *vmx)
 	}
 
 	vmcs_clear(shadow_vmcs);
-	vmcs_load(vmx->loaded_vmcs->vmcs);
+	if (current_vmcs_pa != -1ull)
+		__vmptrld(current_vmcs_pa);
+
 	preempt_enable();
 }