From patchwork Mon May 14 18:58:54 2018
X-Patchwork-Submitter: Marc Orr
X-Patchwork-Id: 10399137
From: Marc Orr <marcorr@google.com>
To: kvm@vger.kernel.org
Cc: marc.zyngier@arm.com, pbonzini@redhat.com, borntraeger@de.ibm.com,
    paulus@ozlabs.org, frankja@linux.ibm.com, cohuck@redhat.com,
    David Rientjes, rkrcmar@redhat.com, David Matlack, Jim Mattson,
    frankja@linux.vnet.ibm.com, jhogan@kernel.org, christoffer.dall@arm.com,
    Marc Orr
Subject: [kvm-unit-tests PATCH v5 1/1] kvm: Make VM ioctl do valloc for some archs
Date: Mon, 14 May 2018 11:58:54 -0700
Message-Id: <20180514185854.227076-1-marcorr@google.com>
X-Mailing-List: kvm@vger.kernel.org

The kvm struct has been bloating. For example, it's tens of kilobytes
for x86, which turns out to be a large amount of memory to allocate
contiguously via kzalloc. Thus, this patch does the following:

1. Uses architecture-specific routines to allocate the kvm struct via
   vzalloc for x86, and also for arm when has_vhe() is true.
2. Introduces a baseline allocator for the kvm struct that uses
   vzalloc. An architecture can use this allocator by defining the
   __KVM_VALLOC_ARCH_VM macro before including include/linux/kvm_host.h.
3. Finally, continues to default to kzalloc for all other situations,
   as many architectures have come to depend on kzalloc.

Signed-off-by: Marc Orr <marcorr@google.com>
Change-Id: Ide392e51991142d1325e06351231ac685e11d820
---
 arch/arm/include/asm/kvm_host.h   |  4 ++++
 arch/arm/kvm/guest.c              | 16 ++++++++++++++++
 arch/arm64/include/asm/kvm_host.h |  4 ++++
 arch/arm64/kvm/guest.c            | 16 ++++++++++++++++
 arch/x86/kvm/svm.c                |  4 ++--
 arch/x86/kvm/vmx.c                |  4 ++--
 include/linux/kvm_host.h          | 13 +++++++++++++
 7 files changed, 57 insertions(+), 4 deletions(-)
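
A note on intended usage (illustration only, not part of the diff below): an
architecture that wants the vzalloc-backed baseline from item 2 above does
not implement any routines of its own; it only defines the opt-in macro in
its asm/kvm_host.h, which include/linux/kvm_host.h pulls in before declaring
the generic fallbacks. The sketch uses a made-up architecture "foo":

  /* arch/foo/include/asm/kvm_host.h -- hypothetical example */

  /*
   * Selects the generic vzalloc()/vfree() pair added to linux/kvm_host.h
   * by this patch; no kvm_arch_alloc_vm()/kvm_arch_free_vm() definitions
   * are needed anywhere in arch/foo.
   */
  #define __KVM_VALLOC_ARCH_VM

Architectures that need their own policy, e.g. arm's runtime has_vhe()
check or a hard requirement for physically contiguous memory, instead keep
defining __KVM_HAVE_ARCH_VM_ALLOC and provide both routines, as the arm,
arm64, and x86 hunks below do.
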
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index c7c28c885a19..f38fd60b4d4d 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -318,4 +318,8 @@ static inline bool kvm_arm_harden_branch_predictor(void)
 static inline void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu) {}
 
+#define __KVM_HAVE_ARCH_VM_ALLOC
+struct kvm *kvm_arch_alloc_vm(void);
+void kvm_arch_free_vm(struct kvm *kvm);
+
 #endif /* __ARM_KVM_HOST_H__ */
diff --git a/arch/arm/kvm/guest.c b/arch/arm/kvm/guest.c
index a18f33edc471..2b88b745f5a3 100644
--- a/arch/arm/kvm/guest.c
+++ b/arch/arm/kvm/guest.c
@@ -42,6 +42,22 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
 	{ NULL }
 };
 
+struct kvm *kvm_arch_alloc_vm(void)
+{
+	if (!has_vhe())
+		return kzalloc(sizeof(struct kvm), GFP_KERNEL);
+
+	return vzalloc(sizeof(struct kvm));
+}
+
+void kvm_arch_free_vm(struct kvm *kvm)
+{
+	if (!has_vhe())
+		kfree(kvm);
+	else
+		vfree(kvm);
+}
+
 int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
 {
 	return 0;
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 469de8acd06f..358b1b4dcd92 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -455,4 +455,8 @@ static inline bool kvm_arm_harden_branch_predictor(void)
 void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu);
 void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu);
 
+#define __KVM_HAVE_ARCH_VM_ALLOC
+struct kvm *kvm_arch_alloc_vm(void);
+void kvm_arch_free_vm(struct kvm *kvm);
+
 #endif /* __ARM64_KVM_HOST_H__ */
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 56a0260ceb11..8bc898079167 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -47,6 +47,22 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
 	{ NULL }
 };
 
+struct kvm *kvm_arch_alloc_vm(void)
+{
+	if (!has_vhe())
+		return kzalloc(sizeof(struct kvm), GFP_KERNEL);
+
+	return vzalloc(sizeof(struct kvm));
+}
+
+void kvm_arch_free_vm(struct kvm *kvm)
+{
+	if (!has_vhe())
+		kfree(kvm);
+	else
+		vfree(kvm);
+}
+
 int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
 {
 	return 0;
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 1fc05e428aba..45b903ed3e5d 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1849,13 +1849,13 @@ static void __unregister_enc_region_locked(struct kvm *kvm,
 static struct kvm *svm_vm_alloc(void)
 {
-	struct kvm_svm *kvm_svm = kzalloc(sizeof(struct kvm_svm), GFP_KERNEL);
+	struct kvm_svm *kvm_svm = vzalloc(sizeof(struct kvm_svm));
 
 	return &kvm_svm->kvm;
 }
 
 static void svm_vm_free(struct kvm *kvm)
 {
-	kfree(to_kvm_svm(kvm));
+	vfree(to_kvm_svm(kvm));
 }
 
 static void sev_vm_destroy(struct kvm *kvm)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 3f1696570b41..ee85ba887db6 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -9936,13 +9936,13 @@ STACK_FRAME_NON_STANDARD(vmx_vcpu_run);
 static struct kvm *vmx_vm_alloc(void)
 {
-	struct kvm_vmx *kvm_vmx = kzalloc(sizeof(struct kvm_vmx), GFP_KERNEL);
+	struct kvm_vmx *kvm_vmx = vzalloc(sizeof(struct kvm_vmx));
 
 	return &kvm_vmx->kvm;
 }
 
 static void vmx_vm_free(struct kvm *kvm)
 {
-	kfree(to_kvm_vmx(kvm));
+	vfree(to_kvm_vmx(kvm));
 }
 
 static void vmx_switch_vmcs(struct kvm_vcpu *vcpu, struct loaded_vmcs *vmcs)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 6d6e79c59e68..e3134283321a 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -19,6 +19,7 @@
 #include <linux/preempt.h>
 #include <linux/msi.h>
 #include <linux/slab.h>
+#include <linux/vmalloc.h>
 #include <linux/rcupdate.h>
 #include <linux/ratelimit.h>
 #include <linux/err.h>
@@ -808,6 +809,17 @@ bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu);
 int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu);
 
 #ifndef __KVM_HAVE_ARCH_VM_ALLOC
+#ifdef __KVM_VALLOC_ARCH_VM
+static inline struct kvm *kvm_arch_alloc_vm(void)
+{
+	return vzalloc(sizeof(struct kvm));
+}
+
+static inline void kvm_arch_free_vm(struct kvm *kvm)
+{
+	vfree(kvm);
+}
+#else
 static inline struct kvm *kvm_arch_alloc_vm(void)
 {
 	return kzalloc(sizeof(struct kvm), GFP_KERNEL);
@@ -818,6 +830,7 @@ static inline void kvm_arch_free_vm(struct kvm *kvm)
 	kfree(kvm);
 }
 #endif
+#endif
 
 #ifdef __KVM_HAVE_ARCH_NONCOHERENT_DMA
 void kvm_arch_register_noncoherent_dma(struct kvm *kvm);
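
For completeness, and paraphrasing rather than quoting virt/kvm/kvm_main.c:
the generic VM-creation path is the consumer of these hooks, which is why
the alloc and free sides must stay paired per architecture. Roughly:

  /* simplified sketch of the generic caller, for illustration only */
  static struct kvm *kvm_create_vm(unsigned long type)
  {
  	struct kvm *kvm = kvm_arch_alloc_vm();	/* kzalloc- or vzalloc-backed */
  	int r;

  	if (!kvm)
  		return ERR_PTR(-ENOMEM);

  	/* ... generic and arch init; any failure jumps to out_err ... */

  	return kvm;

  out_err:
  	kvm_arch_free_vm(kvm);	/* must match the allocator: kfree vs. vfree */
  	return ERR_PTR(r);
  }

kvm_destroy_vm() likewise releases the struct through kvm_arch_free_vm(), so
an architecture only ever has to keep its own alloc/free pair consistent.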