From patchwork Fri May 13 20:28:04 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12849416
Date: Fri, 13 May 2022 20:28:04 +0000
In-Reply-To: <20220513202819.829591-1-dmatlack@google.com>
Message-Id: <20220513202819.829591-7-dmatlack@google.com>
References: <20220513202819.829591-1-dmatlack@google.com>
Subject: [PATCH v5 06/21] KVM: x86/mmu: Consolidate shadow page allocation
 and initialization
From: David Matlack
To: Paolo Bonzini
Cc: Marc Zyngier, Huacai Chen, Aleksandar Markovic, Anup Patel,
 Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson,
 Andrew Jones, Ben Gardon, Peter Xu, maciej.szmigiero@oracle.com,
 "moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)",
 "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)",
"open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)" , Peter Feiner , Lai Jiangshan , David Matlack Precedence: bulk List-ID: X-Mailing-List: linux-mips@vger.kernel.org Consolidate kvm_mmu_alloc_page() and kvm_mmu_alloc_shadow_page() under the latter so that all shadow page allocation and initialization happens in one place. No functional change intended. Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmu.c | 39 +++++++++++++++++---------------------- 1 file changed, 17 insertions(+), 22 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index ce334eaeef22..fd749748b280 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -1690,27 +1690,6 @@ static void drop_parent_pte(struct kvm_mmu_page *sp, mmu_spte_clear_no_track(parent_pte); } -static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, bool direct) -{ - struct kvm_mmu_page *sp; - - sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache); - sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache); - if (!direct) - sp->gfns = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_gfn_array_cache); - set_page_private(virt_to_page(sp->spt), (unsigned long)sp); - - /* - * active_mmu_pages must be a FIFO list, as kvm_zap_obsolete_pages() - * depends on valid pages being added to the head of the list. See - * comments in kvm_zap_obsolete_pages(). - */ - sp->mmu_valid_gen = vcpu->kvm->arch.mmu_valid_gen; - list_add(&sp->link, &vcpu->kvm->arch.active_mmu_pages); - kvm_mod_used_mmu_pages(vcpu->kvm, +1); - return sp; -} - static void mark_unsync(u64 *spte); static void kvm_mmu_mark_parents_unsync(struct kvm_mmu_page *sp) { @@ -2098,7 +2077,23 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu, struct hlist_head *sp_list, union kvm_mmu_page_role role) { - struct kvm_mmu_page *sp = kvm_mmu_alloc_page(vcpu, role.direct); + struct kvm_mmu_page *sp; + + sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache); + sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache); + if (!role.direct) + sp->gfns = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_gfn_array_cache); + + set_page_private(virt_to_page(sp->spt), (unsigned long)sp); + + /* + * active_mmu_pages must be a FIFO list, as kvm_zap_obsolete_pages() + * depends on valid pages being added to the head of the list. See + * comments in kvm_zap_obsolete_pages(). + */ + sp->mmu_valid_gen = vcpu->kvm->arch.mmu_valid_gen; + list_add(&sp->link, &vcpu->kvm->arch.active_mmu_pages); + kvm_mod_used_mmu_pages(vcpu->kvm, +1); sp->gfn = gfn; sp->role = role;