From patchwork Mon Jun 22 20:08:13 2020
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 11619033
From: Sean Christopherson
To: Marc Zyngier, Paolo Bonzini, Arnd Bergmann
Subject: [PATCH v2 12/21] KVM: x86/mmu: Skip filling the gfn cache for
 guaranteed direct MMU topups
Date: Mon, 22 Jun 2020 13:08:13 -0700
Message-Id: <20200622200822.4426-13-sean.j.christopherson@intel.com>
X-Mailer: git-send-email 2.26.0
In-Reply-To: <20200622200822.4426-1-sean.j.christopherson@intel.com>
References: <20200622200822.4426-1-sean.j.christopherson@intel.com>
Cc: linux-arch@vger.kernel.org, Junaid Shahid, Christoffer Dall, Wanpeng Li,
 kvm@vger.kernel.org, Suzuki K Poulose, Joerg Roedel, Peter Shier,
 linux-mips@vger.kernel.org, Sean Christopherson, linux-kernel@vger.kernel.org,
 James Morse, linux-arm-kernel@lists.infradead.org, Ben Gardon,
 Vitaly Kuznetsov, Peter Feiner, kvmarm@lists.cs.columbia.edu,
 Julien Thierry, Jim Mattson

Don't bother filling the gfn array cache when the caller is a fully
direct MMU, i.e. one that won't need a gfn array for shadow pages.

Reviewed-by: Ben Gardon
Signed-off-by: Sean Christopherson
---
(For readers outside the kernel tree, a standalone sketch of the
conditional topup pattern follows the diff.)

 arch/x86/kvm/mmu/mmu.c         | 18 ++++++++++--------
 arch/x86/kvm/mmu/paging_tmpl.h |  4 ++--
 2 files changed, 12 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a8f8eebf67df..8d66cf558f1b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1101,7 +1101,7 @@ static void mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc)
 	}
 }
 
-static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu)
+static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
 {
 	int r;
 
@@ -1114,10 +1114,12 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu)
 				   PT64_ROOT_MAX_LEVEL);
 	if (r)
 		return r;
-	r = mmu_topup_memory_cache(&vcpu->arch.mmu_gfn_array_cache,
-				   PT64_ROOT_MAX_LEVEL);
-	if (r)
-		return r;
+	if (maybe_indirect) {
+		r = mmu_topup_memory_cache(&vcpu->arch.mmu_gfn_array_cache,
+					   PT64_ROOT_MAX_LEVEL);
+		if (r)
+			return r;
+	}
 	return mmu_topup_memory_cache(&vcpu->arch.mmu_page_header_cache,
 				      PT64_ROOT_MAX_LEVEL);
 }
@@ -4107,7 +4109,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 	if (fast_page_fault(vcpu, gpa, error_code))
 		return RET_PF_RETRY;
 
-	r = mmu_topup_memory_caches(vcpu);
+	r = mmu_topup_memory_caches(vcpu, false);
 	if (r)
 		return r;
 
@@ -5147,7 +5149,7 @@ int kvm_mmu_load(struct kvm_vcpu *vcpu)
 {
 	int r;
 
-	r = mmu_topup_memory_caches(vcpu);
+	r = mmu_topup_memory_caches(vcpu, !vcpu->arch.mmu->direct_map);
 	if (r)
 		goto out;
 	r = mmu_alloc_roots(vcpu);
@@ -5341,7 +5343,7 @@ static void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
 	 * or not since pte prefetch is skiped if it does not have
 	 * enough objects in the cache.
 	 */
-	mmu_topup_memory_caches(vcpu);
+	mmu_topup_memory_caches(vcpu, true);
 
 	spin_lock(&vcpu->kvm->mmu_lock);
 
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 3de32122f601..ac39710d0594 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -818,7 +818,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gpa_t addr, u32 error_code,
 		return RET_PF_EMULATE;
 	}
 
-	r = mmu_topup_memory_caches(vcpu);
+	r = mmu_topup_memory_caches(vcpu, true);
 	if (r)
 		return r;
 
@@ -905,7 +905,7 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
 	 * No need to check return value here, rmap_can_add() can
 	 * help us to skip pte prefetch later.
 	 */
-	mmu_topup_memory_caches(vcpu);
+	mmu_topup_memory_caches(vcpu, true);
 
 	if (!VALID_PAGE(root_hpa)) {
 		WARN_ON(1);
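
As noted above the diffstat, here is a minimal standalone sketch of the
conditional topup pattern, compilable outside the kernel. All names in it
(memory_cache, topup_cache, vcpu_caches, CACHE_CAPACITY) are illustrative
stand-ins, not the kernel's API; the real code uses kvm_mmu_memory_cache
and PT64_ROOT_MAX_LEVEL as shown in the diff.

#include <stdbool.h>
#include <stdlib.h>

#define CACHE_CAPACITY 5	/* stand-in for PT64_ROOT_MAX_LEVEL */

struct memory_cache {
	int nobjs;
	void *objects[CACHE_CAPACITY];
};

/* Pre-fill @mc up to @min objects; returns 0 on success, -1 on OOM. */
static int topup_cache(struct memory_cache *mc, int min)
{
	while (mc->nobjs < min) {
		void *obj = calloc(1, 64);

		if (!obj)
			return -1;
		mc->objects[mc->nobjs++] = obj;
	}
	return 0;
}

struct vcpu_caches {
	struct memory_cache shadow_page;
	struct memory_cache gfn_array;
	struct memory_cache page_header;
};

static int topup_caches(struct vcpu_caches *c, bool maybe_indirect)
{
	int r;

	r = topup_cache(&c->shadow_page, CACHE_CAPACITY);
	if (r)
		return r;
	/* Only shadow (indirect) MMUs consume gfn arrays; skip otherwise. */
	if (maybe_indirect) {
		r = topup_cache(&c->gfn_array, CACHE_CAPACITY);
		if (r)
			return r;
	}
	return topup_cache(&c->page_header, CACHE_CAPACITY);
}

int main(void)
{
	struct vcpu_caches c = { 0 };

	/* Direct (TDP) fault path: the gfn array cache stays empty. */
	if (topup_caches(&c, false))
		return 1;
	/* Shadow-paging path: every cache is filled. */
	return topup_caches(&c, true) ? 1 : 0;
}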