From patchwork Thu Feb  3 01:00:30 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12733669
Date: Thu, 3 Feb 2022 01:00:30 +0000
In-Reply-To: <20220203010051.2813563-1-dmatlack@google.com>
Message-Id: <20220203010051.2813563-3-dmatlack@google.com>
References: <20220203010051.2813563-1-dmatlack@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Mailer: git-send-email 2.35.0.rc2.247.g8bbb082509-goog
Subject: [PATCH 02/23] KVM: x86/mmu: Derive shadow MMU page role from parent
From: David Matlack
To: Paolo Bonzini
Cc: Marc Zyngier, Huacai Chen, Aleksandar Markovic, Sean Christopherson,
    Vitaly Kuznetsov, Peter Xu, Wanpeng Li, Jim Mattson, Joerg Roedel,
    Peter Feiner, Andrew Jones, maciej.szmigiero@oracle.com,
    kvm@vger.kernel.org, David Matlack
Precedence: bulk
X-Mailing-List: kvm@vger.kernel.org

Instead of computing the shadow page role from scratch for every new
page, we
can derive most of the information from the parent shadow page. This
avoids redundant calculations such as the quadrant, and reduces the
number of parameters to kvm_mmu_get_page().

Preemptively split out the role calculation to a separate function for
use in a following commit.

No functional change intended.

Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c         | 71 ++++++++++++++++++++++------------
 arch/x86/kvm/mmu/paging_tmpl.h |  9 +++--
 2 files changed, 51 insertions(+), 29 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 6ca38277f2ab..fc9a4d9c0ddd 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2045,30 +2045,14 @@ static void clear_sp_write_flooding_count(u64 *spte)
 	__clear_sp_write_flooding_count(sptep_to_sp(spte));
 }
 
-static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
-					     gfn_t gfn,
-					     gva_t gaddr,
-					     unsigned level,
-					     int direct,
-					     unsigned int access)
+static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu, gfn_t gfn,
+					     union kvm_mmu_page_role role)
 {
-	union kvm_mmu_page_role role;
 	struct hlist_head *sp_list;
-	unsigned quadrant;
 	struct kvm_mmu_page *sp;
 	int collisions = 0;
 	LIST_HEAD(invalid_list);
 
-	role = vcpu->arch.mmu->mmu_role.base;
-	role.level = level;
-	role.direct = direct;
-	role.access = access;
-	if (role.has_4_byte_gpte) {
-		quadrant = gaddr >> (PAGE_SHIFT + (PT64_PT_BITS * level));
-		quadrant &= (1 << ((PT32_PT_BITS - PT64_PT_BITS) * level)) - 1;
-		role.quadrant = quadrant;
-	}
-
 	sp_list = &vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
 	for_each_valid_sp(vcpu->kvm, sp, sp_list) {
 		if (sp->gfn != gfn) {
@@ -2086,7 +2070,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 			 * Unsync pages must not be left as is, because the new
 			 * upper-level page will be write-protected.
 			 */
-			if (level > PG_LEVEL_4K && sp->unsync)
+			if (role.level > PG_LEVEL_4K && sp->unsync)
 				kvm_mmu_prepare_zap_page(vcpu->kvm, sp,
 							 &invalid_list);
 			continue;
@@ -2125,14 +2109,14 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 
 	++vcpu->kvm->stat.mmu_cache_miss;
 
-	sp = kvm_mmu_alloc_page(vcpu, direct);
+	sp = kvm_mmu_alloc_page(vcpu, role.direct);
 
 	sp->gfn = gfn;
 	sp->role = role;
 	hlist_add_head(&sp->hash_link, sp_list);
-	if (!direct) {
+	if (!role.direct) {
 		account_shadowed(vcpu->kvm, sp);
-		if (level == PG_LEVEL_4K && kvm_vcpu_write_protect_gfn(vcpu, gfn))
+		if (role.level == PG_LEVEL_4K && kvm_vcpu_write_protect_gfn(vcpu, gfn))
 			kvm_flush_remote_tlbs_with_address(vcpu->kvm, gfn, 1);
 	}
 	trace_kvm_mmu_get_page(sp, true);
@@ -2144,6 +2128,31 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 	return sp;
 }
 
+static union kvm_mmu_page_role kvm_mmu_child_role(struct kvm_mmu_page *parent_sp,
+						  bool direct, u32 access)
+{
+	union kvm_mmu_page_role role;
+
+	role = parent_sp->role;
+	role.level--;
+	role.access = access;
+	role.direct = direct;
+
+	return role;
+}
+
+static struct kvm_mmu_page *kvm_mmu_get_child_sp(struct kvm_vcpu *vcpu,
+						 u64 *sptep, gfn_t gfn,
+						 bool direct, u32 access)
+{
+	struct kvm_mmu_page *parent_sp = sptep_to_sp(sptep);
+	union kvm_mmu_page_role role;
+
+	role = kvm_mmu_child_role(parent_sp, direct, access);
+
+	return kvm_mmu_get_page(vcpu, gfn, role);
+}
+
 static void shadow_walk_init_using_root(struct kvm_shadow_walk_iterator *iterator,
 					struct kvm_vcpu *vcpu, hpa_t root,
 					u64 addr)
@@ -2942,8 +2951,7 @@ static int __direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 		if (is_shadow_present_pte(*it.sptep))
 			continue;
 
-		sp = kvm_mmu_get_page(vcpu, base_gfn, it.addr,
-				      it.level - 1, true, ACC_ALL);
+		sp = kvm_mmu_get_child_sp(vcpu, it.sptep, base_gfn, true, ACC_ALL);
 
 		link_shadow_page(vcpu, it.sptep, sp);
 		if (fault->is_tdp && fault->huge_page_disallowed &&
@@ -3325,9 +3333,22 @@ static int mmu_check_root(struct kvm_vcpu *vcpu, gfn_t root_gfn)
 static hpa_t mmu_alloc_root(struct kvm_vcpu *vcpu, gfn_t gfn, gva_t gva,
 			    u8 level, bool direct)
 {
+	union kvm_mmu_page_role role;
 	struct kvm_mmu_page *sp;
+	unsigned int quadrant;
+
+	role = vcpu->arch.mmu->mmu_role.base;
+	role.level = level;
+	role.direct = direct;
+	role.access = ACC_ALL;
+
+	if (role.has_4_byte_gpte) {
+		quadrant = gva >> (PAGE_SHIFT + (PT64_PT_BITS * level));
+		quadrant &= (1 << ((PT32_PT_BITS - PT64_PT_BITS) * level)) - 1;
+		role.quadrant = quadrant;
+	}
 
-	sp = kvm_mmu_get_page(vcpu, gfn, gva, level, direct, ACC_ALL);
+	sp = kvm_mmu_get_page(vcpu, gfn, role);
 	++sp->root_count;
 
 	return __pa(sp->spt);
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 5b5bdac97c7b..f93d4423a067 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -683,8 +683,9 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
 		if (!is_shadow_present_pte(*it.sptep)) {
 			table_gfn = gw->table_gfn[it.level - 2];
 			access = gw->pt_access[it.level - 2];
-			sp = kvm_mmu_get_page(vcpu, table_gfn, fault->addr,
-					      it.level-1, false, access);
+			sp = kvm_mmu_get_child_sp(vcpu, it.sptep, table_gfn,
+						  false, access);
+
 			/*
 			 * We must synchronize the pagetable before linking it
 			 * because the guest doesn't need to flush tlb when
@@ -740,8 +741,8 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
 		drop_large_spte(vcpu, it.sptep);
 
 		if (!is_shadow_present_pte(*it.sptep)) {
-			sp = kvm_mmu_get_page(vcpu, base_gfn, fault->addr,
-					      it.level - 1, true, direct_access);
+			sp = kvm_mmu_get_child_sp(vcpu, it.sptep, base_gfn,
+						  true, direct_access);
 			link_shadow_page(vcpu, it.sptep, sp);
 			if (fault->huge_page_disallowed &&
 			    fault->req_level >= it.level)
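
---

For readers less familiar with the shadow MMU code, here is a minimal,
standalone C sketch (not part of the patch) of the idea behind the new
kvm_mmu_child_role() helper: copy the parent's role wholesale and override
only the fields that differ for the child (level, access, direct). The
struct page_role below is a simplified stand-in for union
kvm_mmu_page_role; the real union packs many more bits, which are exactly
what the wholesale copy preserves.

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for union kvm_mmu_page_role (illustration only). */
struct page_role {
	unsigned int level;   /* page-table level of the shadow page */
	unsigned int access;  /* access permissions for this subtree */
	bool direct;          /* no guest page table backs this page */
};

/*
 * Mirror of the approach in kvm_mmu_child_role(): inherit everything from
 * the parent, then adjust only what changes one level down.
 */
static struct page_role child_role(struct page_role parent, bool direct,
				   unsigned int access)
{
	struct page_role role = parent;  /* inherit all other role bits */

	role.level--;          /* child sits one level below the parent */
	role.access = access;
	role.direct = direct;

	return role;
}

int main(void)
{
	struct page_role root = { .level = 4, .access = 7, .direct = false };
	struct page_role child = child_role(root, false, 7);

	printf("parent level %u -> child level %u\n", root.level, child.level);
	return 0;
}

This also shows why the quadrant no longer needs to be recomputed on this
path: anything not explicitly overridden is carried over from the parent
role unchanged.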