From patchwork Fri May 13 20:28:08 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12849420
Date: Fri, 13 May 2022 20:28:08 +0000
In-Reply-To: <20220513202819.829591-1-dmatlack@google.com>
Message-Id: <20220513202819.829591-11-dmatlack@google.com>
References: <20220513202819.829591-1-dmatlack@google.com>
Subject: [PATCH v5 10/21] KVM: x86/mmu: Replace vcpu with kvm in
 kvm_mmu_alloc_shadow_page()
From: David Matlack
To: Paolo Bonzini
Cc: Marc Zyngier, Huacai Chen, Aleksandar Markovic, Anup Patel,
	Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson,
	Andrew Jones, Ben Gardon, Peter Xu, maciej.szmigiero@oracle.com,
	"moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)",
	"open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)",
"open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)" , Peter Feiner , Lai Jiangshan , David Matlack Precedence: bulk List-ID: X-Mailing-List: linux-mips@vger.kernel.org The vcpu pointer in kvm_mmu_alloc_shadow_page() is only used to get the kvm pointer. So drop the vcpu pointer and just pass in the kvm pointer. No functional change intended. Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmu.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 44431c0b797f..9cc73c3453c3 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -2082,7 +2082,7 @@ struct shadow_page_caches { struct kvm_mmu_memory_cache *gfn_array_cache; }; -static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu, +static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm *kvm, struct shadow_page_caches *caches, gfn_t gfn, struct hlist_head *sp_list, @@ -2102,15 +2102,15 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu, * depends on valid pages being added to the head of the list. See * comments in kvm_zap_obsolete_pages(). */ - sp->mmu_valid_gen = vcpu->kvm->arch.mmu_valid_gen; - list_add(&sp->link, &vcpu->kvm->arch.active_mmu_pages); - kvm_mod_used_mmu_pages(vcpu->kvm, +1); + sp->mmu_valid_gen = kvm->arch.mmu_valid_gen; + list_add(&sp->link, &kvm->arch.active_mmu_pages); + kvm_mod_used_mmu_pages(kvm, +1); sp->gfn = gfn; sp->role = role; hlist_add_head(&sp->hash_link, sp_list); if (sp_has_gptes(sp)) - account_shadowed(vcpu->kvm, sp); + account_shadowed(kvm, sp); return sp; } @@ -2129,7 +2129,7 @@ static struct kvm_mmu_page *__kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu, sp = kvm_mmu_find_shadow_page(vcpu, gfn, sp_list, role); if (!sp) { created = true; - sp = kvm_mmu_alloc_shadow_page(vcpu, caches, gfn, sp_list, role); + sp = kvm_mmu_alloc_shadow_page(vcpu->kvm, caches, gfn, sp_list, role); } trace_kvm_mmu_get_page(sp, created);