From patchwork Fri Oct 26 07:58:59 2018
X-Patchwork-Submitter: Marc Orr
X-Patchwork-Id: 10657049
Date: Fri, 26 Oct 2018 00:58:59 -0700
In-Reply-To: <20181026075900.111462-1-marcorr@google.com>
Message-Id: <20181026075900.111462-2-marcorr@google.com>
References: <20181026075900.111462-1-marcorr@google.com>
X-Mailer: git-send-email 2.19.1.568.g152ad8e336-goog
Subject: [kvm PATCH v4 1/2] kvm: vmx: refactor vmx_msrs struct for vmalloc
From: Marc Orr
To: kvm@vger.kernel.org, jmattson@google.com, rientjes@google.com,
    konrad.wilk@oracle.com, linux-mm@kvack.org, akpm@linux-foundation.org,
    pbonzini@redhat.com, rkrcmar@redhat.com, willy@infradead.org,
    sean.j.christopherson@intel.com
Cc: Marc Orr
Previously, the vmx_msrs struct relied on being aligned within a struct
that is backed by the direct map (e.g., memory allocated with
kmalloc()). Specifically, this enabled the virtual addresses associated
with the struct to be translated to physical addresses. However, we'd
like to refactor the host struct, vcpu_vmx, to be allocated with
vmalloc(), so that allocation will succeed when contiguous physical
memory is scarce. Thus, this patch refactors how vmx_msrs is declared
and allocated, to ensure that it can be mapped to the physical address
space, even when vmx_msrs resides within a vmalloc()'d struct.

Signed-off-by: Marc Orr
---
 arch/x86/kvm/vmx.c | 57 ++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 55 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index abeeb45d1c33..3c0303cc101d 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -970,8 +970,25 @@ static inline int pi_test_sn(struct pi_desc *pi_desc)
 
 struct vmx_msrs {
 	unsigned int		nr;
-	struct vmx_msr_entry	val[NR_AUTOLOAD_MSRS];
+	struct vmx_msr_entry	*val;
 };
+struct kmem_cache *vmx_msr_entry_cache;
+
+/*
+ * To prevent vmx_msr_entry array from crossing a page boundary, require:
+ * sizeof(*vmx_msrs.vmx_msr_entry.val) to be a power of two. This is guaranteed
+ * through compile-time asserts that:
+ *   - NR_AUTOLOAD_MSRS * sizeof(struct vmx_msr_entry) is a power of two
+ *   - NR_AUTOLOAD_MSRS * sizeof(struct vmx_msr_entry) <= PAGE_SIZE
+ *   - The allocation of vmx_msrs.vmx_msr_entry.val is aligned to its size.
+ */
+#define CHECK_POWER_OF_TWO(val) \
+	BUILD_BUG_ON_MSG(!((val) && !((val) & ((val) - 1))), \
+			 #val " is not a power of two.")
+#define CHECK_INTRA_PAGE(val) do { \
+		CHECK_POWER_OF_TWO(val); \
+		BUILD_BUG_ON(!(val <= PAGE_SIZE)); \
+	} while (0)
 
 struct vcpu_vmx {
 	struct kvm_vcpu       vcpu;
@@ -11489,6 +11506,19 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
 	if (!vmx)
 		return ERR_PTR(-ENOMEM);
 
+	vmx->msr_autoload.guest.val =
+		kmem_cache_zalloc(vmx_msr_entry_cache, GFP_KERNEL);
+	if (!vmx->msr_autoload.guest.val) {
+		err = -ENOMEM;
+		goto free_vmx;
+	}
+	vmx->msr_autoload.host.val =
+		kmem_cache_zalloc(vmx_msr_entry_cache, GFP_KERNEL);
+	if (!vmx->msr_autoload.host.val) {
+		err = -ENOMEM;
+		goto free_msr_autoload_guest;
+	}
+
 	vmx->vpid = allocate_vpid();
 
 	err = kvm_vcpu_init(&vmx->vcpu, kvm, id);
@@ -11576,6 +11606,10 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
 	kvm_vcpu_uninit(&vmx->vcpu);
 free_vcpu:
 	free_vpid(vmx->vpid);
+	kmem_cache_free(vmx_msr_entry_cache, vmx->msr_autoload.host.val);
+free_msr_autoload_guest:
+	kmem_cache_free(vmx_msr_entry_cache, vmx->msr_autoload.guest.val);
+free_vmx:
 	kmem_cache_free(kvm_vcpu_cache, vmx);
 	return ERR_PTR(err);
 }
@@ -15153,6 +15187,10 @@ module_exit(vmx_exit);
 static int __init vmx_init(void)
 {
 	int r;
+	size_t vmx_msr_entry_size =
+		sizeof(struct vmx_msr_entry) * NR_AUTOLOAD_MSRS;
+
+	CHECK_INTRA_PAGE(vmx_msr_entry_size);
 
 #if IS_ENABLED(CONFIG_HYPERV)
 	/*
@@ -15184,9 +15222,21 @@ static int __init vmx_init(void)
 #endif
 
 	r = kvm_init(&vmx_x86_ops, sizeof(struct vcpu_vmx),
-		     __alignof__(struct vcpu_vmx), THIS_MODULE);
+		__alignof__(struct vcpu_vmx), THIS_MODULE);
 	if (r)
 		return r;
+	/*
+	 * A vmx_msr_entry array resides exclusively within the kernel. Thus,
+	 * use kmem_cache_create_usercopy(), with the usersize argument set to
+	 * ZERO, to blacklist copying vmx_msr_entry to/from user space.
+	 */
+	vmx_msr_entry_cache =
+		kmem_cache_create_usercopy("vmx_msr_entry", vmx_msr_entry_size,
+					   vmx_msr_entry_size, SLAB_ACCOUNT, 0, 0, NULL);
+	if (!vmx_msr_entry_cache) {
+		r = -ENOMEM;
+		goto out;
+	}
 
 	/*
 	 * Must be called after kvm_init() so enable_ept is properly set
@@ -15210,5 +15260,8 @@ static int __init vmx_init(void)
 	vmx_check_vmcs12_offsets();
 
 	return 0;
+out:
+	kvm_exit();
+	return r;
 }
 module_init(vmx_init);
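
A note on why the direct-map requirement in the commit message matters: the
physical address of the MSR autoload array is what ultimately gets handed to
the hardware, and __pa() is only valid for direct-map memory (kmalloc()/
kmem_cache objects), not for vmalloc() memory, which is merely virtually
contiguous. The fragment below is an illustrative, kernel-style sketch only,
not part of this patch; the helper name msr_array_phys() is hypothetical. It
shows the translation difference that motivates allocating val from a
dedicated kmem_cache even once vcpu_vmx itself is vmalloc()'d.

#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <linux/io.h>

/* Hypothetical helper, for illustration only. */
static phys_addr_t msr_array_phys(const void *val)
{
	if (is_vmalloc_addr(val))
		/*
		 * vmalloc memory must be translated page by page, and an
		 * array straddling a page boundary would not even be
		 * physically contiguous.
		 */
		return page_to_phys(vmalloc_to_page(val)) +
		       offset_in_page(val);

	/* Direct-map memory translates directly. */
	return __pa(val);
}

By keeping val in a kmem_cache, the patch never needs the vmalloc branch:
__pa() remains valid and the array is guaranteed to sit within one page.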
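
The CHECK_INTRA_PAGE() assert relies on a small arithmetic fact: an object
whose size is a power of two, no larger than PAGE_SIZE, and aligned to its
own size can never cross a page boundary. The kmem_cache provides the
alignment, since the align argument passed to kmem_cache_create_usercopy()
is set equal to the object size. Below is a standalone userspace C sketch of
that invariant (not from the patch), with 4096 and 128 as hypothetical
stand-ins for PAGE_SIZE and NR_AUTOLOAD_MSRS * sizeof(struct vmx_msr_entry).

#include <assert.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL	/* hypothetical page size */

static int is_power_of_two(unsigned long v)
{
	return v && !(v & (v - 1));
}

static int crosses_page(unsigned long addr, unsigned long size)
{
	/* True if [addr, addr + size) spans two pages. */
	return (addr / PAGE_SIZE) != ((addr + size - 1) / PAGE_SIZE);
}

int main(void)
{
	/* Hypothetical stand-in for the vmx_msr_entry array size. */
	const unsigned long size = 128;
	unsigned long addr;

	assert(is_power_of_two(size) && size <= PAGE_SIZE);

	/* Every size-aligned address keeps the object within one page. */
	for (addr = 0; addr < 16 * PAGE_SIZE; addr += size)
		assert(!crosses_page(addr, size));

	printf("size-aligned, power-of-two objects <= PAGE_SIZE never straddle a page\n");
	return 0;
}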