From patchwork Mon May 16 23:21:24 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12851719
Date: Mon, 16 May 2022 23:21:24 +0000
In-Reply-To: <20220516232138.1783324-1-dmatlack@google.com>
Message-Id: <20220516232138.1783324-9-dmatlack@google.com>
References: <20220516232138.1783324-1-dmatlack@google.com>
Subject: [PATCH v6 08/22] KVM: x86/mmu: Rename shadow MMU functions that deal with shadow pages
From: David Matlack
To: Paolo Bonzini
Cc: Marc Zyngier, Huacai Chen, Aleksandar Markovic, Anup Patel,
 Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson,
 Andrew Jones, Ben Gardon, Peter Xu, maciej.szmigiero@oracle.com,
 "moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)",
 "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)",
 "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)",
 Peter Feiner, Lai Jiangshan, David Matlack
X-Mailing-List: linux-mips@vger.kernel.org
(KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)" , Peter Feiner , Lai Jiangshan , David Matlack Precedence: bulk List-ID: X-Mailing-List: linux-mips@vger.kernel.org Rename 2 functions: kvm_mmu_get_page() -> kvm_mmu_get_shadow_page() kvm_mmu_free_page() -> kvm_mmu_free_shadow_page() This change makes it clear that these functions deal with shadow pages rather than struct pages. It also aligns these functions with the naming scheme for kvm_mmu_find_shadow_page() and kvm_mmu_alloc_shadow_page(). Prefer "shadow_page" over the shorter "sp" since these are core functions and the line lengths aren't terrible. No functional change intended. Reviewed-by: Sean Christopherson Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmu.c | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 0b14097f8771..d342fcc5813d 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -1652,7 +1652,7 @@ static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, long nr) percpu_counter_add(&kvm_total_used_mmu_pages, nr); } -static void kvm_mmu_free_page(struct kvm_mmu_page *sp) +static void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp) { MMU_WARN_ON(!is_empty_shadow_page(sp->spt)); hlist_del(&sp->hash_link); @@ -2107,8 +2107,9 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu, return sp; } -static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu, gfn_t gfn, - union kvm_mmu_page_role role) +static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu, + gfn_t gfn, + union kvm_mmu_page_role role) { struct hlist_head *sp_list; struct kvm_mmu_page *sp; @@ -2172,7 +2173,7 @@ static struct kvm_mmu_page *kvm_mmu_get_child_sp(struct kvm_vcpu *vcpu, union kvm_mmu_page_role role; role = kvm_mmu_child_role(sptep, direct, access); - return kvm_mmu_get_page(vcpu, gfn, role); + return kvm_mmu_get_shadow_page(vcpu, gfn, role); } static void shadow_walk_init_using_root(struct kvm_shadow_walk_iterator *iterator, @@ -2448,7 +2449,7 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm, list_for_each_entry_safe(sp, nsp, invalid_list, link) { WARN_ON(!sp->role.invalid || sp->root_count); - kvm_mmu_free_page(sp); + kvm_mmu_free_shadow_page(sp); } } @@ -3432,7 +3433,7 @@ static hpa_t mmu_alloc_root(struct kvm_vcpu *vcpu, gfn_t gfn, int quadrant, WARN_ON_ONCE(quadrant && !role.has_4_byte_gpte); WARN_ON_ONCE(role.direct && role.has_4_byte_gpte); - sp = kvm_mmu_get_page(vcpu, gfn, role); + sp = kvm_mmu_get_shadow_page(vcpu, gfn, role); ++sp->root_count; return __pa(sp->spt);