From patchwork Fri Jul 3 02:35:33 2020
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 11640685
From: Sean Christopherson
To: Marc Zyngier, Paolo Bonzini, Arnd Bergmann
Subject: [PATCH v3 09/21] KVM: x86/mmu: Separate the memory caches for shadow
 pages and gfn arrays
Date: Thu, 2 Jul 2020 19:35:33 -0700
Message-Id: <20200703023545.8771-10-sean.j.christopherson@intel.com>
X-Mailer: git-send-email 2.26.0
In-Reply-To: <20200703023545.8771-1-sean.j.christopherson@intel.com>
References: <20200703023545.8771-1-sean.j.christopherson@intel.com>
Cc: linux-arch@vger.kernel.org, Junaid Shahid, Christoffer Dall, Wanpeng Li,
 kvm@vger.kernel.org, Suzuki K Poulose, Joerg Roedel, Peter Shier,
 linux-mips@vger.kernel.org, Sean Christopherson,
 linux-kernel@vger.kernel.org, James Morse,
 linux-arm-kernel@lists.infradead.org, Ben Gardon, Vitaly Kuznetsov,
 Peter Feiner, kvmarm@lists.cs.columbia.edu, Julien Thierry, Jim Mattson

Use separate caches for allocating shadow pages versus gfn arrays.  This
sets the stage for specifying __GFP_ZERO when allocating shadow pages
without incurring extra cost for gfn arrays.

No functional change intended.

Reviewed-by: Ben Gardon
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h |  3 ++-
 arch/x86/kvm/mmu/mmu.c          | 15 ++++++++++-----
 2 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 71bc32e00d7e..b71a4e77f65a 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -636,7 +636,8 @@ struct kvm_vcpu_arch {
 	struct kvm_mmu *walk_mmu;
 
 	struct kvm_mmu_memory_cache mmu_pte_list_desc_cache;
-	struct kvm_mmu_memory_cache mmu_page_cache;
+	struct kvm_mmu_memory_cache mmu_shadow_page_cache;
+	struct kvm_mmu_memory_cache mmu_gfn_array_cache;
 	struct kvm_mmu_memory_cache mmu_page_header_cache;
 
 	/*
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index cf02ad93c249..8e1b55d8a728 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1108,8 +1108,12 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu)
 				   1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM);
 	if (r)
 		return r;
-	r = mmu_topup_memory_cache(&vcpu->arch.mmu_page_cache,
-				   2 * PT64_ROOT_MAX_LEVEL);
+	r = mmu_topup_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
+				   PT64_ROOT_MAX_LEVEL);
+	if (r)
+		return r;
+	r = mmu_topup_memory_cache(&vcpu->arch.mmu_gfn_array_cache,
+				   PT64_ROOT_MAX_LEVEL);
 	if (r)
 		return r;
 	return mmu_topup_memory_cache(&vcpu->arch.mmu_page_header_cache,
@@ -1119,7 +1123,8 @@ static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
 {
 	mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache);
-	mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
+	mmu_free_memory_cache(&vcpu->arch.mmu_shadow_page_cache);
+	mmu_free_memory_cache(&vcpu->arch.mmu_gfn_array_cache);
 	mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
 }
 
@@ -2096,9 +2101,9 @@ static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, int direct)
 	struct kvm_mmu_page *sp;
 
 	sp = mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
-	sp->spt = mmu_memory_cache_alloc(&vcpu->arch.mmu_page_cache);
+	sp->spt = mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
 	if (!direct)
-		sp->gfns = mmu_memory_cache_alloc(&vcpu->arch.mmu_page_cache);
+		sp->gfns = mmu_memory_cache_alloc(&vcpu->arch.mmu_gfn_array_cache);
 	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
 
 	/*
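
[Editor's note, not part of the patch] The commit message says the split
"sets the stage for specifying __GFP_ZERO when allocating shadow pages
without incurring extra cost for gfn arrays": once shadow pages and gfn
arrays come from separate caches, each cache can carry its own allocation
policy. Below is a minimal, self-contained userspace C sketch of that
idea. Every name here (demo_memory_cache, demo_cache_topup, and so on) is
a hypothetical stand-in; this is not the kernel's kvm_mmu_memory_cache
API, just an illustration of per-cache zeroing.

/*
 * Sketch only: models the "one allocation policy per cache" idea that the
 * split enables.  All names are hypothetical, not kernel APIs.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

#define DEMO_CACHE_CAPACITY 40

struct demo_memory_cache {
	bool zero_on_alloc;               /* stands in for wanting __GFP_ZERO */
	size_t obj_size;
	int nobjs;
	void *objs[DEMO_CACHE_CAPACITY];
};

/* Fill the cache up to min objects; returns 0 on success, -1 on failure. */
static int demo_cache_topup(struct demo_memory_cache *mc, int min)
{
	while (mc->nobjs < min) {
		void *obj = mc->zero_on_alloc ? calloc(1, mc->obj_size)
					      : malloc(mc->obj_size);
		if (!obj)
			return -1;
		mc->objs[mc->nobjs++] = obj;
	}
	return 0;
}

/* Pop a pre-allocated object; a prior topup guarantees availability. */
static void *demo_cache_alloc(struct demo_memory_cache *mc)
{
	return mc->objs[--mc->nobjs];
}

int main(void)
{
	/* Only the shadow-page cache pays the zeroing cost. */
	struct demo_memory_cache shadow_page_cache = {
		.zero_on_alloc = true, .obj_size = 4096,
	};
	struct demo_memory_cache gfn_array_cache = {
		.zero_on_alloc = false, .obj_size = 4096,
	};

	if (demo_cache_topup(&shadow_page_cache, 5) ||
	    demo_cache_topup(&gfn_array_cache, 5))
		return 1;

	void *spt  = demo_cache_alloc(&shadow_page_cache); /* zeroed */
	void *gfns = demo_cache_alloc(&gfn_array_cache);   /* may be dirty */
	(void)spt;
	(void)gfns;
	return 0;
}

With a single shared cache, as before this patch, the zeroing policy
would have to apply to both object types; the split lets shadow pages be
zeroed on allocation while gfn arrays keep the cheaper non-zeroing path.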