From patchwork Wed Oct 31 13:26:32 2018
X-Patchwork-Submitter: Marc Orr
X-Patchwork-Id: 10662681
Date: Wed, 31 Oct 2018 06:26:32 -0700
In-Reply-To: <20181031132634.50440-1-marcorr@google.com>
Message-Id: <20181031132634.50440-3-marcorr@google.com>
Mime-Version: 1.0
References: <20181031132634.50440-1-marcorr@google.com>
X-Mailer: git-send-email 2.19.1.568.g152ad8e336-goog
Subject: [kvm PATCH v5 2/4] kvm: x86: Dynamically allocate guest_fpu
From: Marc Orr
To: kvm@vger.kernel.org, jmattson@google.com, rientjes@google.com,
    konrad.wilk@oracle.com, linux-mm@kvack.org, akpm@linux-foundation.org,
    pbonzini@redhat.com, rkrcmar@redhat.com, willy@infradead.org,
    sean.j.christopherson@intel.com, dave.hansen@linux.intel.com,
    kernellwp@gmail.com
Cc: Marc Orr

Previously, the guest_fpu field was embedded in the kvm_vcpu_arch struct.
Unfortunately, the field is quite large (e.g., 4352 bytes on my current
setup). This bloats the kvm_vcpu_arch struct for x86 into an order 3
memory allocation, which can become a problem on overcommitted machines.
Thus, this patch moves the fpu state outside of the kvm_vcpu_arch struct.

With this patch applied, the kvm_vcpu_arch struct is reduced to 15168
bytes for vmx on my setup when building the kernel with kvmconfig.
Suggested-by: Dave Hansen
Signed-off-by: Marc Orr
---
 arch/x86/include/asm/kvm_host.h |  3 ++-
 arch/x86/kvm/svm.c              | 10 ++++++++
 arch/x86/kvm/vmx.c              | 10 ++++++++
 arch/x86/kvm/x86.c              | 45 +++++++++++++++++++++++----------
 4 files changed, 54 insertions(+), 14 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index ebb1d7a755d4..c8a2a263f91f 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -610,7 +610,7 @@ struct kvm_vcpu_arch {
 	 * "guest_fpu" state here contains the guest FPU context, with the
 	 * host PRKU bits.
 	 */
-	struct fpu guest_fpu;
+	struct fpu *guest_fpu;
 
 	u64 xcr0;
 	u64 guest_supported_xcr0;
@@ -1194,6 +1194,7 @@ struct kvm_arch_async_pf {
 };
 
 extern struct kvm_x86_ops *kvm_x86_ops;
+extern struct kmem_cache *x86_fpu_cache;
 
 #define __KVM_HAVE_ARCH_VM_ALLOC
 static inline struct kvm *kvm_arch_alloc_vm(void)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index f416f5c7f2ae..ac0c52ca22c6 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -2121,6 +2121,13 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
 		goto out;
 	}
 
+	svm->vcpu.arch.guest_fpu = kmem_cache_zalloc(x86_fpu_cache, GFP_KERNEL);
+	if (!svm->vcpu.arch.guest_fpu) {
+		printk(KERN_ERR "kvm: failed to allocate vcpu's fpu\n");
+		err = -ENOMEM;
+		goto free_partial_svm;
+	}
+
 	err = kvm_vcpu_init(&svm->vcpu, kvm, id);
 	if (err)
 		goto free_svm;
@@ -2180,6 +2187,8 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
 uninit:
 	kvm_vcpu_uninit(&svm->vcpu);
 free_svm:
+	kmem_cache_free(x86_fpu_cache, svm->vcpu.arch.guest_fpu);
+free_partial_svm:
 	kmem_cache_free(kvm_vcpu_cache, svm);
 out:
 	return ERR_PTR(err);
@@ -2194,6 +2203,7 @@ static void svm_free_vcpu(struct kvm_vcpu *vcpu)
 	__free_page(virt_to_page(svm->nested.hsave));
 	__free_pages(virt_to_page(svm->nested.msrpm), MSRPM_ALLOC_ORDER);
 	kvm_vcpu_uninit(vcpu);
+	kmem_cache_free(x86_fpu_cache, svm->vcpu.arch.guest_fpu);
 	kmem_cache_free(kvm_vcpu_cache, svm);
 	/*
 	 * The vmcb page can be recycled, causing a false negative in
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index abeeb45d1c33..4078cf15a4b0 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -11476,6 +11476,7 @@ static void vmx_free_vcpu(struct kvm_vcpu *vcpu)
 	free_loaded_vmcs(vmx->loaded_vmcs);
 	kfree(vmx->guest_msrs);
 	kvm_vcpu_uninit(vcpu);
+	kmem_cache_free(x86_fpu_cache, vmx->vcpu.arch.guest_fpu);
 	kmem_cache_free(kvm_vcpu_cache, vmx);
 }
 
@@ -11489,6 +11490,13 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
 	if (!vmx)
 		return ERR_PTR(-ENOMEM);
 
+	vmx->vcpu.arch.guest_fpu = kmem_cache_zalloc(x86_fpu_cache, GFP_KERNEL);
+	if (!vmx->vcpu.arch.guest_fpu) {
+		printk(KERN_ERR "kvm: failed to allocate vcpu's fpu\n");
+		err = -ENOMEM;
+		goto free_partial_vcpu;
+	}
+
 	vmx->vpid = allocate_vpid();
 
 	err = kvm_vcpu_init(&vmx->vcpu, kvm, id);
@@ -11576,6 +11584,8 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
 	kvm_vcpu_uninit(&vmx->vcpu);
 free_vcpu:
 	free_vpid(vmx->vpid);
+	kmem_cache_free(x86_fpu_cache, vmx->vcpu.arch.guest_fpu);
+free_partial_vcpu:
 	kmem_cache_free(kvm_vcpu_cache, vmx);
 	return ERR_PTR(err);
 }
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ff77514f7367..420516f0749a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -213,6 +213,9 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
 
 u64 __read_mostly host_xcr0;
 
+struct kmem_cache *x86_fpu_cache;
+EXPORT_SYMBOL_GPL(x86_fpu_cache);
+
 static int emulator_fix_hypercall(struct x86_emulate_ctxt *ctxt);
 
 static inline void kvm_async_pf_hash_reset(struct kvm_vcpu *vcpu)
@@ -3635,7 +3638,7 @@ static int kvm_vcpu_ioctl_x86_set_debugregs(struct kvm_vcpu *vcpu,
 
 static void fill_xsave(u8 *dest, struct kvm_vcpu *vcpu)
 {
-	struct xregs_state *xsave = &vcpu->arch.guest_fpu.state.xsave;
+	struct xregs_state *xsave = &vcpu->arch.guest_fpu->state.xsave;
 	u64 xstate_bv = xsave->header.xfeatures;
 	u64 valid;
@@ -3677,7 +3680,7 @@ static void fill_xsave(u8 *dest, struct kvm_vcpu *vcpu)
 
 static void load_xsave(struct kvm_vcpu *vcpu, u8 *src)
 {
-	struct xregs_state *xsave = &vcpu->arch.guest_fpu.state.xsave;
+	struct xregs_state *xsave = &vcpu->arch.guest_fpu->state.xsave;
 	u64 xstate_bv = *(u64 *)(src + XSAVE_HDR_OFFSET);
 	u64 valid;
@@ -3725,7 +3728,7 @@ static void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu,
 		fill_xsave((u8 *) guest_xsave->region, vcpu);
 	} else {
 		memcpy(guest_xsave->region,
-			&vcpu->arch.guest_fpu.state.fxsave,
+			&vcpu->arch.guest_fpu->state.fxsave,
 			sizeof(struct fxregs_state));
 		*(u64 *)&guest_xsave->region[XSAVE_HDR_OFFSET / sizeof(u32)] =
 			XFEATURE_MASK_FPSSE;
@@ -3755,7 +3758,7 @@ static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu,
 		if (xstate_bv & ~XFEATURE_MASK_FPSSE ||
 			mxcsr & ~mxcsr_feature_mask)
 			return -EINVAL;
-		memcpy(&vcpu->arch.guest_fpu.state.fxsave,
+		memcpy(&vcpu->arch.guest_fpu->state.fxsave,
 			guest_xsave->region, sizeof(struct fxregs_state));
 	}
 	return 0;
@@ -6819,10 +6822,23 @@ int kvm_arch_init(void *opaque)
 	}
 
 	r = -ENOMEM;
+	x86_fpu_cache = kmem_cache_create_usercopy(
+				"x86_fpu",
+				sizeof(struct fpu),
+				__alignof__(struct fpu),
+				SLAB_ACCOUNT,
+				offsetof(struct fpu, state),
+				sizeof_field(struct fpu, state),
+				NULL);
+	if (!x86_fpu_cache) {
+		printk(KERN_ERR "kvm: failed to allocate cache for x86 fpu\n");
+		goto out;
+	}
+
 	shared_msrs = alloc_percpu(struct kvm_shared_msrs);
 	if (!shared_msrs) {
 		printk(KERN_ERR "kvm: failed to allocate percpu kvm_shared_msrs\n");
-		goto out;
+		goto out_free_x86_fpu_cache;
 	}
 
 	r = kvm_mmu_module_init();
@@ -6855,6 +6871,8 @@ int kvm_arch_init(void *opaque)
 out_free_percpu:
 	free_percpu(shared_msrs);
+out_free_x86_fpu_cache:
+	kmem_cache_destroy(x86_fpu_cache);
 out:
 	return r;
 }
@@ -6878,6 +6896,7 @@ void kvm_arch_exit(void)
 	kvm_x86_ops = NULL;
 	kvm_mmu_module_exit();
 	free_percpu(shared_msrs);
+	kmem_cache_destroy(x86_fpu_cache);
 }
 
 int kvm_vcpu_halt(struct kvm_vcpu *vcpu)
@@ -8001,7 +8020,7 @@ static void kvm_load_guest_fpu(struct kvm_vcpu *vcpu)
 	preempt_disable();
 	copy_fpregs_to_fpstate(&current->thread.fpu);
 	/* PKRU is separately restored in kvm_x86_ops->run. */
-	__copy_kernel_to_fpregs(&vcpu->arch.guest_fpu.state,
+	__copy_kernel_to_fpregs(&vcpu->arch.guest_fpu->state,
 				~XFEATURE_MASK_PKRU);
 	preempt_enable();
 	trace_kvm_fpu(1);
@@ -8011,7 +8030,7 @@ static void kvm_load_guest_fpu(struct kvm_vcpu *vcpu)
 static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu)
 {
 	preempt_disable();
-	copy_fpregs_to_fpstate(&vcpu->arch.guest_fpu);
+	copy_fpregs_to_fpstate(vcpu->arch.guest_fpu);
 	copy_kernel_to_fpregs(&current->thread.fpu.state);
 	preempt_enable();
 	++vcpu->stat.fpu_reload;
@@ -8506,7 +8525,7 @@ int kvm_arch_vcpu_ioctl_get_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
 
 	vcpu_load(vcpu);
 
-	fxsave = &vcpu->arch.guest_fpu.state.fxsave;
+	fxsave = &vcpu->arch.guest_fpu->state.fxsave;
 	memcpy(fpu->fpr, fxsave->st_space, 128);
 	fpu->fcw = fxsave->cwd;
 	fpu->fsw = fxsave->swd;
@@ -8526,7 +8545,7 @@ int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
 
 	vcpu_load(vcpu);
 
-	fxsave = &vcpu->arch.guest_fpu.state.fxsave;
+	fxsave = &vcpu->arch.guest_fpu->state.fxsave;
 
 	memcpy(fxsave->st_space, fpu->fpr, 128);
 	fxsave->cwd = fpu->fcw;
@@ -8582,9 +8601,9 @@ static int sync_regs(struct kvm_vcpu *vcpu)
 
 static void fx_init(struct kvm_vcpu *vcpu)
 {
-	fpstate_init(&vcpu->arch.guest_fpu.state);
+	fpstate_init(&vcpu->arch.guest_fpu->state);
 	if (boot_cpu_has(X86_FEATURE_XSAVES))
-		vcpu->arch.guest_fpu.state.xsave.header.xcomp_bv =
+		vcpu->arch.guest_fpu->state.xsave.header.xcomp_bv =
 			host_xcr0 | XSTATE_COMPACTION_ENABLED;
 
 	/*
@@ -8708,11 +8727,11 @@ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 		 */
 		if (init_event)
 			kvm_put_guest_fpu(vcpu);
-		mpx_state_buffer = get_xsave_addr(&vcpu->arch.guest_fpu.state.xsave,
+		mpx_state_buffer = get_xsave_addr(&vcpu->arch.guest_fpu->state.xsave,
 					XFEATURE_MASK_BNDREGS);
 		if (mpx_state_buffer)
 			memset(mpx_state_buffer, 0, sizeof(struct mpx_bndreg_state));
-		mpx_state_buffer = get_xsave_addr(&vcpu->arch.guest_fpu.state.xsave,
+		mpx_state_buffer = get_xsave_addr(&vcpu->arch.guest_fpu->state.xsave,
 					XFEATURE_MASK_BNDCSR);
 		if (mpx_state_buffer)
 			memset(mpx_state_buffer, 0, sizeof(struct mpx_bndcsr));