From patchwork Wed Oct 31 23:49:27 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Marc Orr
X-Patchwork-Id: 10663359
Date: Wed, 31 Oct 2018 16:49:27 -0700
In-Reply-To: <20181031234928.144206-1-marcorr@google.com>
Message-Id: <20181031234928.144206-2-marcorr@google.com>
References: <20181031234928.144206-1-marcorr@google.com>
X-Mailer: git-send-email 2.19.1.568.g152ad8e336-goog
Subject: [kvm PATCH v6 1/2] kvm: x86: Use task structs fpu field for user
From: Marc Orr
To: kvm@vger.kernel.org, jmattson@google.com, rientjes@google.com,
 konrad.wilk@oracle.com, linux-mm@kvack.org, akpm@linux-foundation.org,
 pbonzini@redhat.com, rkrcmar@redhat.com, willy@infradead.org,
 sean.j.christopherson@intel.com, dave.hansen@linux.intel.com,
 kernellwp@gmail.com
Cc: Marc Orr
List-ID: kvm@vger.kernel.org

Previously, x86's instantiation of 'struct kvm_vcpu_arch' included a
user_fpu field, used to save and restore the userspace FPU state, which
differs from the guest's FPU state, across vcpu_run. However, this
field is redundant with the 'struct fpu' named fpu that is already
embedded in the task struct via its thread field. Thus, this patch
removes the user_fpu field from the kvm_vcpu_arch struct and uses the
task struct's fpu field in its place.

This change is significant because the fpu struct is quite large. For
example, on the system used to develop this patch, it reduces the size
of the vcpu_vmx struct from 23680 bytes down to 19520 bytes when
building the kernel with kvmconfig. This reduction moves us closer to
being able to allocate the struct at order 2, rather than order 3.

Suggested-by: Dave Hansen
Signed-off-by: Marc Orr
---
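(Illustration, not part of the patch: the userspace model below makes
the size argument concrete. It is a sketch, not kernel code -- the
4352-byte figure for 'struct fpu' is borrowed from patch 2/2's commit
message, and the two vcpu structs model only the FPU fields this series
touches. It shows that dropping the extra embedded fpu shrinks each
vCPU by one full fpu struct.)

#include <stdio.h>

/* Stand-in for the kernel's 'struct fpu'; size from patch 2/2. */
struct fpu { unsigned char state[4352]; };

/* Before: each vCPU embeds a user_fpu in addition to the copy that
 * already lives in its thread's task struct. After: it does not. */
struct vcpu_arch_before { struct fpu user_fpu; struct fpu guest_fpu; };
struct vcpu_arch_after  { struct fpu guest_fpu; };

int main(void)
{
        printf("fpu bytes per vcpu before: %zu\n",
               sizeof(struct vcpu_arch_before));
        printf("fpu bytes per vcpu after:  %zu\n",
               sizeof(struct vcpu_arch_after));
        return 0;
}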
 arch/x86/include/asm/kvm_host.h | 7 +++----
 arch/x86/kvm/x86.c              | 4 ++--
 2 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 55e51ff7e421..ebb1d7a755d4 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -601,16 +601,15 @@ struct kvm_vcpu_arch {
 
        /*
         * QEMU userspace and the guest each have their own FPU state.
-        * In vcpu_run, we switch between the user and guest FPU contexts.
-        * While running a VCPU, the VCPU thread will have the guest FPU
-        * context.
+        * In vcpu_run, we switch between the user, maintained in the
+        * task_struct struct, and guest FPU contexts. While running a VCPU,
+        * the VCPU thread will have the guest FPU context.
         *
         * Note that while the PKRU state lives inside the fpu registers,
         * it is switched out separately at VMENTER and VMEXIT time. The
         * "guest_fpu" state here contains the guest FPU context, with the
         * host PRKU bits.
         */
-       struct fpu user_fpu;
        struct fpu guest_fpu;
 
        u64 xcr0;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index bdcb5babfb68..ff77514f7367 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7999,7 +7999,7 @@ static int complete_emulated_mmio(struct kvm_vcpu *vcpu)
 static void kvm_load_guest_fpu(struct kvm_vcpu *vcpu)
 {
        preempt_disable();
-       copy_fpregs_to_fpstate(&vcpu->arch.user_fpu);
+       copy_fpregs_to_fpstate(&current->thread.fpu);
        /* PKRU is separately restored in kvm_x86_ops->run.  */
        __copy_kernel_to_fpregs(&vcpu->arch.guest_fpu.state,
                                ~XFEATURE_MASK_PKRU);
@@ -8012,7 +8012,7 @@ static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu)
 {
        preempt_disable();
        copy_fpregs_to_fpstate(&vcpu->arch.guest_fpu);
-       copy_kernel_to_fpregs(&vcpu->arch.user_fpu.state);
+       copy_kernel_to_fpregs(&current->thread.fpu.state);
        preempt_enable();
        ++vcpu->stat.fpu_reload;
        trace_kvm_fpu(0);
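(Illustration, not part of either patch: after 1/2, the vCPU thread
saves the user state into current->thread.fpu and loads guest_fpu on
entry, and does the reverse on exit, as in the two hunks above. The
runnable userspace sketch below models that swap with plain memcpy on
byte buffers standing in for copy_fpregs_to_fpstate() and
copy_kernel_to_fpregs(); all names and sizes here are invented for the
model.)

#include <stdio.h>
#include <string.h>

static char fpregs[8];          /* stand-in for the hardware registers */
static char thread_fpu[8];      /* stand-in for current->thread.fpu    */
static char guest_fpu[8];       /* stand-in for vcpu->arch.guest_fpu   */

static void load_guest_fpu(void)
{
        memcpy(thread_fpu, fpregs, sizeof(fpregs)); /* save user state  */
        memcpy(fpregs, guest_fpu, sizeof(fpregs));  /* load guest state */
}

static void put_guest_fpu(void)
{
        memcpy(guest_fpu, fpregs, sizeof(fpregs));  /* save guest state */
        memcpy(fpregs, thread_fpu, sizeof(fpregs)); /* restore user     */
}

int main(void)
{
        strcpy(fpregs, "user");
        strcpy(guest_fpu, "guest");

        load_guest_fpu();
        printf("running guest with: %s\n", fpregs);
        put_guest_fpu();
        printf("back in userspace:  %s\n", fpregs);
        return 0;
}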
From patchwork Wed Oct 31 23:49:28 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Marc Orr
X-Patchwork-Id: 10663361
Date: Wed, 31 Oct 2018 16:49:28 -0700
In-Reply-To: <20181031234928.144206-1-marcorr@google.com>
Message-Id: <20181031234928.144206-3-marcorr@google.com>
References: <20181031234928.144206-1-marcorr@google.com>
X-Mailer: git-send-email 2.19.1.568.g152ad8e336-goog
Subject: [kvm PATCH v6 2/2] kvm: x86: Dynamically allocate guest_fpu
From: Marc Orr
To: kvm@vger.kernel.org, jmattson@google.com, rientjes@google.com,
 konrad.wilk@oracle.com, linux-mm@kvack.org, akpm@linux-foundation.org,
 pbonzini@redhat.com, rkrcmar@redhat.com, willy@infradead.org,
 sean.j.christopherson@intel.com, dave.hansen@linux.intel.com,
 kernellwp@gmail.com
Cc: Marc Orr
List-ID: kvm@vger.kernel.org

Previously, the guest_fpu field was embedded in the kvm_vcpu_arch
struct. Unfortunately, the field is quite large (e.g., 4352 bytes on my
current setup). This bloats the kvm_vcpu_arch struct for x86 into an
order-3 memory allocation, which can become a problem on overcommitted
machines. Thus, this patch moves the fpu state outside of the
kvm_vcpu_arch struct. With this patch applied, the kvm_vcpu_arch struct
is reduced to 15168 bytes for vmx on my setup when building the kernel
with kvmconfig.

Suggested-by: Dave Hansen
Signed-off-by: Marc Orr
---
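(Illustration, not part of the patch: the arithmetic behind "order 2
rather than order 3" can be checked with the runnable sketch below. It
is not kernel code; get_order() here merely mimics the semantics of the
kernel helper of the same name for 4 KiB pages, and the byte counts are
the ones quoted in the two commit messages.)

#include <stdio.h>

/* Userspace mimic of the kernel's get_order(): the smallest n such
 * that 2^n pages of 4096 bytes cover the requested size. */
static int get_order(size_t size)
{
        size_t pages = (size + 4095) / 4096;
        int order = 0;

        while (((size_t)1 << order) < pages)
                order++;
        return order;
}

int main(void)
{
        /* Sizes quoted in the commit messages: vcpu_vmx before and
         * after patch 1/2, and the vmx vcpu after patch 2/2. */
        size_t sizes[] = { 23680, 19520, 15168 };

        for (int i = 0; i < 3; i++)
                printf("%5zu bytes -> order %d (%zu pages)\n", sizes[i],
                       get_order(sizes[i]),
                       (size_t)1 << get_order(sizes[i]));
        return 0;
}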
 arch/x86/include/asm/kvm_host.h |  3 +-
 arch/x86/kvm/svm.c              | 10 +++++++
 arch/x86/kvm/vmx.c              | 10 +++++++
 arch/x86/kvm/x86.c              | 51 ++++++++++++++++++++++++---------
 4 files changed, 60 insertions(+), 14 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index ebb1d7a755d4..c8a2a263f91f 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -610,7 +610,7 @@ struct kvm_vcpu_arch {
         * "guest_fpu" state here contains the guest FPU context, with the
         * host PRKU bits.
         */
-       struct fpu guest_fpu;
+       struct fpu *guest_fpu;
 
        u64 xcr0;
        u64 guest_supported_xcr0;
@@ -1194,6 +1194,7 @@ struct kvm_arch_async_pf {
 };
 
 extern struct kvm_x86_ops *kvm_x86_ops;
+extern struct kmem_cache *x86_fpu_cache;
 
 #define __KVM_HAVE_ARCH_VM_ALLOC
 static inline struct kvm *kvm_arch_alloc_vm(void)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index f416f5c7f2ae..ac0c52ca22c6 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -2121,6 +2121,13 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
                goto out;
        }
 
+       svm->vcpu.arch.guest_fpu = kmem_cache_zalloc(x86_fpu_cache, GFP_KERNEL);
+       if (!svm->vcpu.arch.guest_fpu) {
+               printk(KERN_ERR "kvm: failed to allocate vcpu's fpu\n");
+               err = -ENOMEM;
+               goto free_partial_svm;
+       }
+
        err = kvm_vcpu_init(&svm->vcpu, kvm, id);
        if (err)
                goto free_svm;
@@ -2180,6 +2187,8 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
 uninit:
        kvm_vcpu_uninit(&svm->vcpu);
 free_svm:
+       kmem_cache_free(x86_fpu_cache, svm->vcpu.arch.guest_fpu);
+free_partial_svm:
        kmem_cache_free(kvm_vcpu_cache, svm);
 out:
        return ERR_PTR(err);
@@ -2194,6 +2203,7 @@ static void svm_free_vcpu(struct kvm_vcpu *vcpu)
        __free_page(virt_to_page(svm->nested.hsave));
        __free_pages(virt_to_page(svm->nested.msrpm), MSRPM_ALLOC_ORDER);
        kvm_vcpu_uninit(vcpu);
+       kmem_cache_free(x86_fpu_cache, svm->vcpu.arch.guest_fpu);
        kmem_cache_free(kvm_vcpu_cache, svm);
        /*
         * The vmcb page can be recycled, causing a false negative in
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index abeeb45d1c33..4078cf15a4b0 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -11476,6 +11476,7 @@ static void vmx_free_vcpu(struct kvm_vcpu *vcpu)
        free_loaded_vmcs(vmx->loaded_vmcs);
        kfree(vmx->guest_msrs);
        kvm_vcpu_uninit(vcpu);
+       kmem_cache_free(x86_fpu_cache, vmx->vcpu.arch.guest_fpu);
        kmem_cache_free(kvm_vcpu_cache, vmx);
 }
 
@@ -11489,6 +11490,13 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
        if (!vmx)
                return ERR_PTR(-ENOMEM);
 
+       vmx->vcpu.arch.guest_fpu = kmem_cache_zalloc(x86_fpu_cache, GFP_KERNEL);
+       if (!vmx->vcpu.arch.guest_fpu) {
+               printk(KERN_ERR "kvm: failed to allocate vcpu's fpu\n");
+               err = -ENOMEM;
+               goto free_partial_vcpu;
+       }
+
        vmx->vpid = allocate_vpid();
 
        err = kvm_vcpu_init(&vmx->vcpu, kvm, id);
@@ -11576,6 +11584,8 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
        kvm_vcpu_uninit(&vmx->vcpu);
 free_vcpu:
        free_vpid(vmx->vpid);
+       kmem_cache_free(x86_fpu_cache, vmx->vcpu.arch.guest_fpu);
+free_partial_vcpu:
        kmem_cache_free(kvm_vcpu_cache, vmx);
        return ERR_PTR(err);
 }
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ff77514f7367..8abe058f48d9 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -213,6 +213,9 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
 
 u64 __read_mostly host_xcr0;
 
+struct kmem_cache *x86_fpu_cache;
+EXPORT_SYMBOL_GPL(x86_fpu_cache);
+
 static int emulator_fix_hypercall(struct x86_emulate_ctxt *ctxt);
 
 static inline void kvm_async_pf_hash_reset(struct kvm_vcpu *vcpu)
@@ -3635,7 +3638,7 @@ static int kvm_vcpu_ioctl_x86_set_debugregs(struct kvm_vcpu *vcpu,
 
 static void fill_xsave(u8 *dest, struct kvm_vcpu *vcpu)
 {
-       struct xregs_state *xsave = &vcpu->arch.guest_fpu.state.xsave;
+       struct xregs_state *xsave = &vcpu->arch.guest_fpu->state.xsave;
        u64 xstate_bv = xsave->header.xfeatures;
        u64 valid;
 
@@ -3677,7 +3680,7 @@ static void fill_xsave(u8 *dest, struct kvm_vcpu *vcpu)
 
 static void load_xsave(struct kvm_vcpu *vcpu, u8 *src)
 {
-       struct xregs_state *xsave = &vcpu->arch.guest_fpu.state.xsave;
+       struct xregs_state *xsave = &vcpu->arch.guest_fpu->state.xsave;
        u64 xstate_bv = *(u64 *)(src + XSAVE_HDR_OFFSET);
        u64 valid;
 
@@ -3725,7 +3728,7 @@ static void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu,
                fill_xsave((u8 *) guest_xsave->region, vcpu);
        } else {
                memcpy(guest_xsave->region,
-                       &vcpu->arch.guest_fpu.state.fxsave,
+                       &vcpu->arch.guest_fpu->state.fxsave,
                        sizeof(struct fxregs_state));
                *(u64 *)&guest_xsave->region[XSAVE_HDR_OFFSET / sizeof(u32)] =
                        XFEATURE_MASK_FPSSE;
@@ -3755,7 +3758,7 @@ static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu,
                if (xstate_bv & ~XFEATURE_MASK_FPSSE ||
                        mxcsr & ~mxcsr_feature_mask)
                        return -EINVAL;
-               memcpy(&vcpu->arch.guest_fpu.state.fxsave,
+               memcpy(&vcpu->arch.guest_fpu->state.fxsave,
                        guest_xsave->region, sizeof(struct fxregs_state));
        }
        return 0;
@@ -6818,11 +6821,30 @@ int kvm_arch_init(void *opaque)
                goto out;
        }
 
+       if (!boot_cpu_has(X86_FEATURE_FPU) || !boot_cpu_has(X86_FEATURE_FXSR)) {
+               printk(KERN_ERR "kvm: inadequate fpu\n");
+               r = -EOPNOTSUPP;
+               goto out;
+       }
+
        r = -ENOMEM;
+       x86_fpu_cache = kmem_cache_create_usercopy(
+                               "x86_fpu",
+                               fpu_kernel_xstate_size,
+                               __alignof__(struct fpu),
+                               SLAB_ACCOUNT,
+                               offsetof(struct fpu, state),
+                               fpu_kernel_xstate_size,
+                               NULL);
+       if (!x86_fpu_cache) {
+               printk(KERN_ERR "kvm: failed to allocate cache for x86 fpu\n");
+               goto out;
+       }
+
        shared_msrs = alloc_percpu(struct kvm_shared_msrs);
        if (!shared_msrs) {
                printk(KERN_ERR "kvm: failed to allocate percpu kvm_shared_msrs\n");
-               goto out;
+               goto out_free_x86_fpu_cache;
        }
 
        r = kvm_mmu_module_init();
@@ -6855,6 +6877,8 @@ int kvm_arch_init(void *opaque)
 
 out_free_percpu:
        free_percpu(shared_msrs);
+out_free_x86_fpu_cache:
+       kmem_cache_destroy(x86_fpu_cache);
 out:
        return r;
 }
@@ -6878,6 +6902,7 @@ void kvm_arch_exit(void)
        kvm_x86_ops = NULL;
        kvm_mmu_module_exit();
        free_percpu(shared_msrs);
+       kmem_cache_destroy(x86_fpu_cache);
 }
 
 int kvm_vcpu_halt(struct kvm_vcpu *vcpu)
@@ -8001,7 +8026,7 @@ static void kvm_load_guest_fpu(struct kvm_vcpu *vcpu)
        preempt_disable();
        copy_fpregs_to_fpstate(&current->thread.fpu);
        /* PKRU is separately restored in kvm_x86_ops->run.  */
-       __copy_kernel_to_fpregs(&vcpu->arch.guest_fpu.state,
+       __copy_kernel_to_fpregs(&vcpu->arch.guest_fpu->state,
                                ~XFEATURE_MASK_PKRU);
        preempt_enable();
        trace_kvm_fpu(1);
@@ -8011,7 +8036,7 @@ static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu)
 {
        preempt_disable();
-       copy_fpregs_to_fpstate(&vcpu->arch.guest_fpu);
+       copy_fpregs_to_fpstate(vcpu->arch.guest_fpu);
        copy_kernel_to_fpregs(&current->thread.fpu.state);
        preempt_enable();
        ++vcpu->stat.fpu_reload;
@@ -8506,7 +8531,7 @@ int kvm_arch_vcpu_ioctl_get_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
 
        vcpu_load(vcpu);
 
-       fxsave = &vcpu->arch.guest_fpu.state.fxsave;
+       fxsave = &vcpu->arch.guest_fpu->state.fxsave;
        memcpy(fpu->fpr, fxsave->st_space, 128);
        fpu->fcw = fxsave->cwd;
        fpu->fsw = fxsave->swd;
@@ -8526,7 +8551,7 @@ int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
 
        vcpu_load(vcpu);
 
-       fxsave = &vcpu->arch.guest_fpu.state.fxsave;
+       fxsave = &vcpu->arch.guest_fpu->state.fxsave;
 
        memcpy(fxsave->st_space, fpu->fpr, 128);
        fxsave->cwd = fpu->fcw;
@@ -8582,9 +8607,9 @@ static int sync_regs(struct kvm_vcpu *vcpu)
 
 static void fx_init(struct kvm_vcpu *vcpu)
 {
-       fpstate_init(&vcpu->arch.guest_fpu.state);
+       fpstate_init(&vcpu->arch.guest_fpu->state);
        if (boot_cpu_has(X86_FEATURE_XSAVES))
-               vcpu->arch.guest_fpu.state.xsave.header.xcomp_bv =
+               vcpu->arch.guest_fpu->state.xsave.header.xcomp_bv =
                        host_xcr0 | XSTATE_COMPACTION_ENABLED;
 
        /*
@@ -8708,11 +8733,11 @@ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
         */
        if (init_event)
                kvm_put_guest_fpu(vcpu);
-       mpx_state_buffer = get_xsave_addr(&vcpu->arch.guest_fpu.state.xsave,
+       mpx_state_buffer = get_xsave_addr(&vcpu->arch.guest_fpu->state.xsave,
                                        XFEATURE_MASK_BNDREGS);
        if (mpx_state_buffer)
                memset(mpx_state_buffer, 0, sizeof(struct mpx_bndreg_state));
-       mpx_state_buffer = get_xsave_addr(&vcpu->arch.guest_fpu.state.xsave,
+       mpx_state_buffer = get_xsave_addr(&vcpu->arch.guest_fpu->state.xsave,
                                        XFEATURE_MASK_BNDCSR);
        if (mpx_state_buffer)
                memset(mpx_state_buffer, 0, sizeof(struct mpx_bndcsr));
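(Illustration, not part of the patch: both svm_create_vcpu() and
vmx_create_vcpu() above extend the same goto-unwind idiom, adding a
free_partial_* label so that a failure after the fpu allocation frees
it, while a failure of the allocation itself frees only the containing
struct. The runnable userspace sketch below models that control flow,
with malloc()/free() standing in for kmem_cache_zalloc() and
kmem_cache_free(), and the 4352-byte size taken from the commit
message.)

#include <stdio.h>
#include <stdlib.h>

struct vcpu { void *guest_fpu; };

static struct vcpu *create_vcpu(int fail_later_init)
{
        struct vcpu *v = calloc(1, sizeof(*v));

        if (!v)
                goto out;

        v->guest_fpu = malloc(4352);    /* like kmem_cache_zalloc() */
        if (!v->guest_fpu)
                goto free_partial_vcpu; /* nothing else to unwind   */

        if (fail_later_init)            /* e.g. kvm_vcpu_init() fails */
                goto free_vcpu;

        return v;

free_vcpu:
        free(v->guest_fpu);             /* unwind in reverse order */
free_partial_vcpu:
        free(v);
out:
        return NULL;
}

int main(void)
{
        struct vcpu *v = create_vcpu(0);

        printf("created: %s\n", v ? "yes" : "no");
        if (v) {
                free(v->guest_fpu);
                free(v);
        }
        return 0;
}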