From patchwork Wed Sep 21 17:35:37 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12984043
Date: Wed, 21 Sep 2022 10:35:37 -0700
In-Reply-To: <20220921173546.2674386-1-dmatlack@google.com>
Message-ID: <20220921173546.2674386-2-dmatlack@google.com>
Subject: [PATCH v3 01/10] KVM: x86/mmu: Change tdp_mmu to a read-only parameter
From: David Matlack
To: Paolo Bonzini
Cc: Sean Christopherson, kvm@vger.kernel.org, David Matlack, Kai Huang, Isaku Yamahata, Peter Xu

Change tdp_mmu to a read-only parameter and drop the per-vm tdp_mmu_enabled.
For 32-bit KVM, make tdp_mmu_enabled a macro that is always false so that the compiler can continue omitting calls to the TDP MMU. The TDP MMU was introduced in 5.10 and has been enabled by default since 5.15. At this point there are no known functionality gaps between the TDP MMU and the shadow MMU, and the TDP MMU uses less memory and scales better with the number of vCPUs. In other words, there is no good reason to disable the TDP MMU on a live system. Purposely do not drop tdp_mmu=N support (i.e. do not force 64-bit KVM to always use the TDP MMU) since tdp_mmu=N is still used to get test coverage of KVM's shadow MMU TDP support, which is used in 32-bit KVM. Signed-off-by: David Matlack Reviewed-by: Kai Huang --- arch/x86/include/asm/kvm_host.h | 9 ------ arch/x86/kvm/mmu.h | 6 ++-- arch/x86/kvm/mmu/mmu.c | 51 ++++++++++++++++++++++----------- arch/x86/kvm/mmu/tdp_mmu.c | 9 ++---- 4 files changed, 39 insertions(+), 36 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 2c96c43c313a..d76059270a43 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1262,15 +1262,6 @@ struct kvm_arch { struct task_struct *nx_lpage_recovery_thread; #ifdef CONFIG_X86_64 - /* - * Whether the TDP MMU is enabled for this VM. This contains a - * snapshot of the TDP MMU module parameter from when the VM was - * created and remains unchanged for the life of the VM. If this is - * true, TDP MMU handler functions will run for various MMU - * operations. - */ - bool tdp_mmu_enabled; - /* * List of struct kvm_mmu_pages being used as roots. * All struct kvm_mmu_pages in the list should have diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h index 6bdaacb6faa0..168c46fd8dd1 100644 --- a/arch/x86/kvm/mmu.h +++ b/arch/x86/kvm/mmu.h @@ -230,14 +230,14 @@ static inline bool kvm_shadow_root_allocated(struct kvm *kvm) } #ifdef CONFIG_X86_64 -static inline bool is_tdp_mmu_enabled(struct kvm *kvm) { return kvm->arch.tdp_mmu_enabled; } +extern bool tdp_mmu_enabled; #else -static inline bool is_tdp_mmu_enabled(struct kvm *kvm) { return false; } +#define tdp_mmu_enabled false #endif static inline bool kvm_memslots_have_rmaps(struct kvm *kvm) { - return !is_tdp_mmu_enabled(kvm) || kvm_shadow_root_allocated(kvm); + return !tdp_mmu_enabled || kvm_shadow_root_allocated(kvm); } static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index e418ef3ecfcb..ccb0b18fd194 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -98,6 +98,13 @@ module_param_named(flush_on_reuse, force_flush_and_sync_on_reuse, bool, 0644); */ bool tdp_enabled = false; +bool __ro_after_init tdp_mmu_allowed; + +#ifdef CONFIG_X86_64 +bool __read_mostly tdp_mmu_enabled = true; +module_param_named(tdp_mmu, tdp_mmu_enabled, bool, 0444); +#endif + static int max_huge_page_level __read_mostly; static int tdp_root_level __read_mostly; static int max_tdp_level __read_mostly; @@ -1253,7 +1260,7 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm, { struct kvm_rmap_head *rmap_head; - if (is_tdp_mmu_enabled(kvm)) + if (tdp_mmu_enabled) kvm_tdp_mmu_clear_dirty_pt_masked(kvm, slot, slot->base_gfn + gfn_offset, mask, true); @@ -1286,7 +1293,7 @@ static void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm, { struct kvm_rmap_head *rmap_head; - if (is_tdp_mmu_enabled(kvm)) + if (tdp_mmu_enabled) kvm_tdp_mmu_clear_dirty_pt_masked(kvm, slot, slot->base_gfn + gfn_offset, mask, false); @@ -1369,7 +1376,7 @@
bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm, } } - if (is_tdp_mmu_enabled(kvm)) + if (tdp_mmu_enabled) write_protected |= kvm_tdp_mmu_write_protect_gfn(kvm, slot, gfn, min_level); @@ -1532,7 +1539,7 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range) if (kvm_memslots_have_rmaps(kvm)) flush = kvm_handle_gfn_range(kvm, range, kvm_zap_rmap); - if (is_tdp_mmu_enabled(kvm)) + if (tdp_mmu_enabled) flush = kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush); return flush; @@ -1545,7 +1552,7 @@ bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range) if (kvm_memslots_have_rmaps(kvm)) flush = kvm_handle_gfn_range(kvm, range, kvm_set_pte_rmap); - if (is_tdp_mmu_enabled(kvm)) + if (tdp_mmu_enabled) flush |= kvm_tdp_mmu_set_spte_gfn(kvm, range); return flush; @@ -1618,7 +1625,7 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range) if (kvm_memslots_have_rmaps(kvm)) young = kvm_handle_gfn_range(kvm, range, kvm_age_rmap); - if (is_tdp_mmu_enabled(kvm)) + if (tdp_mmu_enabled) young |= kvm_tdp_mmu_age_gfn_range(kvm, range); return young; @@ -1631,7 +1638,7 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range) if (kvm_memslots_have_rmaps(kvm)) young = kvm_handle_gfn_range(kvm, range, kvm_test_age_rmap); - if (is_tdp_mmu_enabled(kvm)) + if (tdp_mmu_enabled) young |= kvm_tdp_mmu_test_age_gfn(kvm, range); return young; @@ -3543,7 +3550,7 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu) if (r < 0) goto out_unlock; - if (is_tdp_mmu_enabled(vcpu->kvm)) { + if (tdp_mmu_enabled) { root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu); mmu->root.hpa = root; } else if (shadow_root_level >= PT64_ROOT_4LEVEL) { @@ -5662,6 +5669,9 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level, tdp_root_level = tdp_forced_root_level; max_tdp_level = tdp_max_root_level; +#ifdef CONFIG_X86_64 + tdp_mmu_enabled = tdp_mmu_allowed && tdp_enabled; +#endif /* * max_huge_page_level reflects KVM's MMU capabilities irrespective * of kernel support, e.g. KVM may be capable of using 1GB pages when @@ -5909,7 +5919,7 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm) * write and in the same critical section as making the reload request, * e.g. before kvm_zap_obsolete_pages() could drop mmu_lock and yield. */ - if (is_tdp_mmu_enabled(kvm)) + if (tdp_mmu_enabled) kvm_tdp_mmu_invalidate_all_roots(kvm); /* @@ -5934,7 +5944,7 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm) * Deferring the zap until the final reference to the root is put would * lead to use-after-free. 
*/ - if (is_tdp_mmu_enabled(kvm)) + if (tdp_mmu_enabled) kvm_tdp_mmu_zap_invalidated_roots(kvm); } @@ -6046,7 +6056,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end) flush = kvm_rmap_zap_gfn_range(kvm, gfn_start, gfn_end); - if (is_tdp_mmu_enabled(kvm)) { + if (tdp_mmu_enabled) { for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) flush = kvm_tdp_mmu_zap_leafs(kvm, i, gfn_start, gfn_end, true, flush); @@ -6079,7 +6089,7 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm, write_unlock(&kvm->mmu_lock); } - if (is_tdp_mmu_enabled(kvm)) { + if (tdp_mmu_enabled) { read_lock(&kvm->mmu_lock); kvm_tdp_mmu_wrprot_slot(kvm, memslot, start_level); read_unlock(&kvm->mmu_lock); @@ -6322,7 +6332,7 @@ void kvm_mmu_try_split_huge_pages(struct kvm *kvm, u64 start, u64 end, int target_level) { - if (!is_tdp_mmu_enabled(kvm)) + if (!tdp_mmu_enabled) return; if (kvm_memslots_have_rmaps(kvm)) @@ -6343,7 +6353,7 @@ void kvm_mmu_slot_try_split_huge_pages(struct kvm *kvm, u64 start = memslot->base_gfn; u64 end = start + memslot->npages; - if (!is_tdp_mmu_enabled(kvm)) + if (!tdp_mmu_enabled) return; if (kvm_memslots_have_rmaps(kvm)) { @@ -6426,7 +6436,7 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm, write_unlock(&kvm->mmu_lock); } - if (is_tdp_mmu_enabled(kvm)) { + if (tdp_mmu_enabled) { read_lock(&kvm->mmu_lock); kvm_tdp_mmu_zap_collapsible_sptes(kvm, slot); read_unlock(&kvm->mmu_lock); @@ -6461,7 +6471,7 @@ void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm, write_unlock(&kvm->mmu_lock); } - if (is_tdp_mmu_enabled(kvm)) { + if (tdp_mmu_enabled) { read_lock(&kvm->mmu_lock); kvm_tdp_mmu_clear_dirty_slot(kvm, memslot); read_unlock(&kvm->mmu_lock); @@ -6496,7 +6506,7 @@ void kvm_mmu_zap_all(struct kvm *kvm) kvm_mmu_commit_zap_page(kvm, &invalid_list); - if (is_tdp_mmu_enabled(kvm)) + if (tdp_mmu_enabled) kvm_tdp_mmu_zap_all(kvm); write_unlock(&kvm->mmu_lock); @@ -6661,6 +6671,13 @@ void __init kvm_mmu_x86_module_init(void) if (nx_huge_pages == -1) __set_nx_huge_pages(get_nx_auto_mode()); + /* + * Snapshot userspace's desire to enable the TDP MMU. Whether or not the + * TDP MMU is actually enabled is determined in kvm_configure_mmu() + * when the vendor module is loaded. + */ + tdp_mmu_allowed = tdp_mmu_enabled; + kvm_mmu_spte_module_init(); } diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index bf2ccf9debca..e7d0f21fbbe8 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -10,23 +10,18 @@ #include #include -static bool __read_mostly tdp_mmu_enabled = true; -module_param_named(tdp_mmu, tdp_mmu_enabled, bool, 0644); - /* Initializes the TDP MMU for the VM, if enabled. */ int kvm_mmu_init_tdp_mmu(struct kvm *kvm) { struct workqueue_struct *wq; - if (!tdp_enabled || !READ_ONCE(tdp_mmu_enabled)) + if (!tdp_mmu_enabled) return 0; wq = alloc_workqueue("kvm", WQ_UNBOUND|WQ_MEM_RECLAIM|WQ_CPU_INTENSIVE, 0); if (!wq) return -ENOMEM; - /* This should not be changed for the lifetime of the VM. */ - kvm->arch.tdp_mmu_enabled = true; INIT_LIST_HEAD(&kvm->arch.tdp_mmu_roots); spin_lock_init(&kvm->arch.tdp_mmu_pages_lock); INIT_LIST_HEAD(&kvm->arch.tdp_mmu_pages); @@ -48,7 +43,7 @@ static __always_inline bool kvm_lockdep_assert_mmu_lock_held(struct kvm *kvm, void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm) { - if (!kvm->arch.tdp_mmu_enabled) + if (!tdp_mmu_enabled) return; /* Also waits for any queued work items. 
*/

From patchwork Wed Sep 21 17:35:38 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12984044
Date: Wed, 21 Sep 2022 10:35:38 -0700
In-Reply-To: <20220921173546.2674386-1-dmatlack@google.com>
Message-ID: <20220921173546.2674386-3-dmatlack@google.com>
Subject: [PATCH v3 02/10] KVM: x86/mmu: Move TDP MMU VM init/uninit behind tdp_mmu_enabled
From: David Matlack
To: Paolo Bonzini
Cc: Sean Christopherson, kvm@vger.kernel.org, David Matlack, Kai Huang, Isaku Yamahata, Peter Xu

Move kvm_mmu_{init,uninit}_tdp_mmu() behind tdp_mmu_enabled.
This makes these functions consistent with the rest of the calls into the TDP MMU from mmu.c, and is now possible since tdp_mmu_enabled is only modified when the x86 vendor module is loaded, i.e. it will never change during the lifetime of a VM. This change also enables removing the stub definitions for 32-bit KVM, as the compiler will just optimize the calls out like it does for all the other TDP MMU functions. No functional change intended. Signed-off-by: David Matlack Reviewed-by: Isaku Yamahata --- arch/x86/kvm/mmu/mmu.c | 11 +++++++---- arch/x86/kvm/mmu/tdp_mmu.c | 6 ------ arch/x86/kvm/mmu/tdp_mmu.h | 7 +++---- 3 files changed, 10 insertions(+), 14 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index ccb0b18fd194..dd261cd2ad4e 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -5970,9 +5970,11 @@ int kvm_mmu_init_vm(struct kvm *kvm) INIT_LIST_HEAD(&kvm->arch.lpage_disallowed_mmu_pages); spin_lock_init(&kvm->arch.mmu_unsync_pages_lock); - r = kvm_mmu_init_tdp_mmu(kvm); - if (r < 0) - return r; + if (tdp_mmu_enabled) { + r = kvm_mmu_init_tdp_mmu(kvm); + if (r < 0) + return r; + } node->track_write = kvm_mmu_pte_write; node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot; @@ -6002,7 +6004,8 @@ void kvm_mmu_uninit_vm(struct kvm *kvm) kvm_page_track_unregister_notifier(kvm, node); - kvm_mmu_uninit_tdp_mmu(kvm); + if (tdp_mmu_enabled) + kvm_mmu_uninit_tdp_mmu(kvm); mmu_free_vm_memory_caches(kvm); } diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index e7d0f21fbbe8..08ab3596dfaa 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -15,9 +15,6 @@ int kvm_mmu_init_tdp_mmu(struct kvm *kvm) { struct workqueue_struct *wq; - if (!tdp_mmu_enabled) - return 0; - wq = alloc_workqueue("kvm", WQ_UNBOUND|WQ_MEM_RECLAIM|WQ_CPU_INTENSIVE, 0); if (!wq) return -ENOMEM; @@ -43,9 +40,6 @@ static __always_inline bool kvm_lockdep_assert_mmu_lock_held(struct kvm *kvm, void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm) { - if (!tdp_mmu_enabled) - return; - /* Also waits for any queued work items.
*/ destroy_workqueue(kvm->arch.tdp_mmu_zap_wq); diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h index c163f7cc23ca..9d086a103f77 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.h +++ b/arch/x86/kvm/mmu/tdp_mmu.h @@ -5,6 +5,9 @@ #include +int kvm_mmu_init_tdp_mmu(struct kvm *kvm); +void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm); + hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu); __must_check static inline bool kvm_tdp_mmu_get_root(struct kvm_mmu_page *root) @@ -66,8 +69,6 @@ u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, u64 addr, u64 *spte); #ifdef CONFIG_X86_64 -int kvm_mmu_init_tdp_mmu(struct kvm *kvm); -void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm); static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return sp->tdp_mmu_page; } static inline bool is_tdp_mmu(struct kvm_mmu *mmu) @@ -87,8 +88,6 @@ static inline bool is_tdp_mmu(struct kvm_mmu *mmu) return sp && is_tdp_mmu_page(sp) && sp->root_count; } #else -static inline int kvm_mmu_init_tdp_mmu(struct kvm *kvm) { return 0; } -static inline void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm) {} static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return false; } static inline bool is_tdp_mmu(struct kvm_mmu *mmu) { return false; } #endif

From patchwork Wed Sep 21 17:35:39 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12984045
Date: Wed, 21 Sep 2022 10:35:39 -0700
In-Reply-To: <20220921173546.2674386-1-dmatlack@google.com>
Message-ID: <20220921173546.2674386-4-dmatlack@google.com>
Subject: [PATCH v3 03/10] KVM: x86/mmu: Grab mmu_invalidate_seq in kvm_faultin_pfn()
From: David Matlack
To: Paolo Bonzini
Cc: Sean Christopherson, kvm@vger.kernel.org, David Matlack, Kai Huang, Isaku Yamahata, Peter Xu

Grab mmu_invalidate_seq in kvm_faultin_pfn() and stash it in struct kvm_page_fault. This eliminates duplicate code and reduces the number of parameters needed for is_page_fault_stale(). Preemptively split out __kvm_faultin_pfn() to a separate function for use in subsequent commits. No functional change intended. Signed-off-by: David Matlack Reviewed-by: Isaku Yamahata --- arch/x86/kvm/mmu/mmu.c | 21 ++++++++++++--------- arch/x86/kvm/mmu/mmu_internal.h | 1 + arch/x86/kvm/mmu/paging_tmpl.h | 6 +----- 3 files changed, 14 insertions(+), 14 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index dd261cd2ad4e..31b835d20762 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4129,7 +4129,7 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work) kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, 0, true); } -static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) +static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) { struct kvm_memory_slot *slot = fault->slot; bool async; @@ -4185,12 +4185,20 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) return RET_PF_CONTINUE; } +static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) +{ + fault->mmu_seq = vcpu->kvm->mmu_invalidate_seq; + smp_rmb(); + + return __kvm_faultin_pfn(vcpu, fault); +} + /* * Returns true if the page fault is stale and needs to be retried, i.e. if the * root was invalidated by a memslot update or a relevant mmu_notifier fired.
*/ static bool is_page_fault_stale(struct kvm_vcpu *vcpu, - struct kvm_page_fault *fault, int mmu_seq) + struct kvm_page_fault *fault) { struct kvm_mmu_page *sp = to_shadow_page(vcpu->arch.mmu->root.hpa); @@ -4210,14 +4218,12 @@ static bool is_page_fault_stale(struct kvm_vcpu *vcpu, return true; return fault->slot && - mmu_invalidate_retry_hva(vcpu->kvm, mmu_seq, fault->hva); + mmu_invalidate_retry_hva(vcpu->kvm, fault->mmu_seq, fault->hva); } static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) { bool is_tdp_mmu_fault = is_tdp_mmu(vcpu->arch.mmu); - - unsigned long mmu_seq; int r; fault->gfn = fault->addr >> PAGE_SHIFT; @@ -4234,9 +4240,6 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault if (r) return r; - mmu_seq = vcpu->kvm->mmu_invalidate_seq; - smp_rmb(); - r = kvm_faultin_pfn(vcpu, fault); if (r != RET_PF_CONTINUE) return r; @@ -4252,7 +4255,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault else write_lock(&vcpu->kvm->mmu_lock); - if (is_page_fault_stale(vcpu, fault, mmu_seq)) + if (is_page_fault_stale(vcpu, fault)) goto out_unlock; r = make_mmu_pages_available(vcpu); diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index 582def531d4d..1c0a1e7c796d 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -221,6 +221,7 @@ struct kvm_page_fault { struct kvm_memory_slot *slot; /* Outputs of kvm_faultin_pfn. */ + unsigned long mmu_seq; kvm_pfn_t pfn; hva_t hva; bool map_writable; diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h index 39e0205e7300..98f4abce4eaf 100644 --- a/arch/x86/kvm/mmu/paging_tmpl.h +++ b/arch/x86/kvm/mmu/paging_tmpl.h @@ -791,7 +791,6 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault { struct guest_walker walker; int r; - unsigned long mmu_seq; bool is_self_change_mapping; pgprintk("%s: addr %lx err %x\n", __func__, fault->addr, fault->error_code); @@ -838,9 +837,6 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault else fault->max_level = walker.level; - mmu_seq = vcpu->kvm->mmu_invalidate_seq; - smp_rmb(); - r = kvm_faultin_pfn(vcpu, fault); if (r != RET_PF_CONTINUE) return r; @@ -871,7 +867,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault r = RET_PF_RETRY; write_lock(&vcpu->kvm->mmu_lock); - if (is_page_fault_stale(vcpu, fault, mmu_seq)) + if (is_page_fault_stale(vcpu, fault)) goto out_unlock; r = make_mmu_pages_available(vcpu); From patchwork Wed Sep 21 17:35:40 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12984046 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7CEF5C6FA8B for ; Wed, 21 Sep 2022 17:36:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230293AbiIURgE (ORCPT ); Wed, 21 Sep 2022 13:36:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34266 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230255AbiIURgB (ORCPT ); Wed, 21 Sep 2022 13:36:01 -0400 Received: from mail-yw1-x1149.google.com (mail-yw1-x1149.google.com [IPv6:2607:f8b0:4864:20::1149]) by lindbergh.monkeyblade.net (Postfix) with 
ESMTPS id 4EC2AA284E for ; Wed, 21 Sep 2022 10:35:59 -0700 (PDT) Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-34d2a92912dso51660247b3.14 for ; Wed, 21 Sep 2022 10:35:59 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=u9Vd7tEftAXGnn44yUXAC5/iP4NUEzzbRwz1C9sd3AM=; b=Dq0oBkbV0SXCGqzsyxWdldJ7YHX5uJaKIjTNpE0p5AL79Lw7oi1OVU5qi7ftVs1xlI wVxpyBHimWgbY6934ayFcoKOmgbzmGoXQ0TG1pXAziPIm/PCstD20Xh3grqKMa6YU92G 6j2TxXfOHT+/bOcpymSF9IZg4vpcXd/dl/u3Xcizf7gEHpiqvAiCX1NjVTnBSYvfzFFw b53fSf7Kd0B38dB+Y1trsRHOwL5CJSfJ8rQgy1d/wDd6b1FxIHYbSGLPv30zxqaFVV47 eAjS/H81p/lyYsY2DIsl6GmoQGV6aZ/vRe6Re1Xl8dwU39IYhW9Lb6Nh6mL3+ev78FXU ygkQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=u9Vd7tEftAXGnn44yUXAC5/iP4NUEzzbRwz1C9sd3AM=; b=36yHKzQBWO12T8nxCYerMaXfVPsOx9yTM7TVpPP04n2TlT4ZZ8IFAOzYzbQscJmibi /KzdMT7PI4pmRfxJiUFeV4AGyFLYlfxkYWCCOql7Zv82HXnDo6598uXBoCbXurZqDyUC w+jkJLONTDm7IQqLwrk8NVB8IzKx0fY8KAOh5YcCOjBdgHr361Wi7fXFo0Xa0gSzmi4D btEC5LSIcAsoLBPyYCaAsdJrkcQYVS+PASkmNpoDZVbEDnCPXzGDkXH0mqY8vjPmIOsk AK0AICFV3z5jcRFOb1IEj01+3RTeapf27SPoxmcNFT2Sgos8Bh6L8whXjEccKdMXy97N y9eA== X-Gm-Message-State: ACrzQf3AVLDfvfy998VZWm3cOvZBNCbA3d3YhAxZsdXjHTN4dgitT/FU o/7b5J4QzJpKe/Y5E5nFy9+p04PHbNiuXg== X-Google-Smtp-Source: AMsMyM7oLccIpi/v9qxwUmkDHcoe7hvxKAxsSIsxf98wh7AFTsFgbj85fMjja/3DvqPY106yVhT6ms4qRN+50A== X-Received: from dmatlack-n2d-128.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1309]) (user=dmatlack job=sendgmr) by 2002:a0d:c6c7:0:b0:33d:c81b:fd14 with SMTP id i190-20020a0dc6c7000000b0033dc81bfd14mr24325445ywd.286.1663781758584; Wed, 21 Sep 2022 10:35:58 -0700 (PDT) Date: Wed, 21 Sep 2022 10:35:40 -0700 In-Reply-To: <20220921173546.2674386-1-dmatlack@google.com> Mime-Version: 1.0 References: <20220921173546.2674386-1-dmatlack@google.com> X-Mailer: git-send-email 2.37.3.998.g577e59143f-goog Message-ID: <20220921173546.2674386-5-dmatlack@google.com> Subject: [PATCH v3 04/10] KVM: x86/mmu: Handle error PFNs in kvm_faultin_pfn() From: David Matlack To: Paolo Bonzini Cc: Sean Christopherson , kvm@vger.kernel.org, David Matlack , Kai Huang , Isaku Yamahata , Peter Xu Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Handle error PFNs in kvm_faultin_pfn() rather than relying on the caller to invoke handle_abnormal_pfn() after kvm_faultin_pfn(). Opportunistically rename kvm_handle_bad_page() to kvm_handle_error_pfn() to make it more consistent with is_error_pfn(). This commit moves KVM closer to being able to drop handle_abnormal_pfn(), which will reduce the amount of duplicate code in the various page fault handlers. No functional change intended. 
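As a rough standalone illustration of the shape of this refactor (plain C with hypothetical names, not KVM code): once the function that produces the PFN checks for the error case itself, callers only look at a single status code instead of re-checking the returned value.

#include <stdio.h>

#define PFN_ERROR    (-1L)   /* stand-in for an "error PFN" sentinel value */
#define RET_CONTINUE  0
#define RET_HANDLED   1

/* Pretend backend lookup: GFN 42 has no usable backing page. */
static long lookup_pfn(long gfn)
{
        return (gfn == 42) ? PFN_ERROR : gfn + 1000;
}

/*
 * The error check lives inside the function that produces the PFN, so
 * every caller gets the handling for free instead of repeating it.
 */
static int faultin_pfn(long gfn, long *pfn)
{
        *pfn = lookup_pfn(gfn);
        if (*pfn == PFN_ERROR)
                return RET_HANDLED;     /* handled once, here */
        return RET_CONTINUE;
}

int main(void)
{
        long pfn;

        if (faultin_pfn(42, &pfn) != RET_CONTINUE)
                printf("error PFN handled centrally\n");
        if (faultin_pfn(7, &pfn) == RET_CONTINUE)
                printf("gfn 7 -> pfn %ld\n", pfn);
        return 0;
}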
Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmu.c | 17 +++++++++++------ 1 file changed, 11 insertions(+), 6 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 31b835d20762..49a5e38ecc5c 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3141,7 +3141,7 @@ static void kvm_send_hwpoison_signal(unsigned long address, struct task_struct * send_sig_mceerr(BUS_MCEERR_AR, (void __user *)address, PAGE_SHIFT, tsk); } -static int kvm_handle_bad_page(struct kvm_vcpu *vcpu, gfn_t gfn, kvm_pfn_t pfn) +static int kvm_handle_error_pfn(struct kvm_vcpu *vcpu, gfn_t gfn, kvm_pfn_t pfn) { /* * Do not cache the mmio info caused by writing the readonly gfn @@ -3162,10 +3162,6 @@ static int kvm_handle_bad_page(struct kvm_vcpu *vcpu, gfn_t gfn, kvm_pfn_t pfn) static int handle_abnormal_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault, unsigned int access) { - /* The pfn is invalid, report the error! */ - if (unlikely(is_error_pfn(fault->pfn))) - return kvm_handle_bad_page(vcpu, fault->gfn, fault->pfn); - if (unlikely(!fault->slot)) { gva_t gva = fault->is_tdp ? 0 : fault->addr; @@ -4187,10 +4183,19 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) { + int ret; + fault->mmu_seq = vcpu->kvm->mmu_invalidate_seq; smp_rmb(); - return __kvm_faultin_pfn(vcpu, fault); + ret = __kvm_faultin_pfn(vcpu, fault); + if (ret != RET_PF_CONTINUE) + return ret; + + if (unlikely(is_error_pfn(fault->pfn))) + return kvm_handle_error_pfn(vcpu, fault->gfn, fault->pfn); + + return RET_PF_CONTINUE; } /* From patchwork Wed Sep 21 17:35:41 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12984047 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3C396C6FA82 for ; Wed, 21 Sep 2022 17:36:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230307AbiIURgF (ORCPT ); Wed, 21 Sep 2022 13:36:05 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34320 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230308AbiIURgC (ORCPT ); Wed, 21 Sep 2022 13:36:02 -0400 Received: from mail-pg1-x549.google.com (mail-pg1-x549.google.com [IPv6:2607:f8b0:4864:20::549]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8B550A286C for ; Wed, 21 Sep 2022 10:36:01 -0700 (PDT) Received: by mail-pg1-x549.google.com with SMTP id l72-20020a63914b000000b00434ac6f8214so3797360pge.13 for ; Wed, 21 Sep 2022 10:36:01 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=59+B9TDT/gIROCNhGrod21Ny9r8atdEwg98pahDNmvs=; b=JXjYJEPL2haF3Dyco6hklLcIaAtU3LX2tSWPXkyuuUJ8A953BygfoinB1unBlKh4Gs rKz+phQyqrGFPT448c4ffKGXIu3wZYnRk8lAXa3ZR7KR49i0MrbLcRAlZyp9N4agaKO5 m4q6GeoAZyCc0GMcnhMWF3lnawsBsvYllY1r++tW3fQshug/7FBCT6pometAtunuc+2z Sh6qh1n4rrpB1s6hEZEjqFkfrkCRff01GtciMj1U4um6J38NzkoNFodFQr7mPQ/LnYC/ 3ExCjHqXOZdKK40UXKJpUcVVE14UwK9EnZK9/GX0YXTN/kTUwD0mrItDCPyPMQMIiSRS uwvw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; 
h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=59+B9TDT/gIROCNhGrod21Ny9r8atdEwg98pahDNmvs=; b=vy4aeFJ1LL2WUK3weREoOf84i68NT5tDX8Nn6WuKVb/K4wHkz/s+ITcGRn2Wn//4oU s6nDXyv/XaC2oIhdK8iWwUzXXY6K2OzsOssxpq4w7cbiDHWBrRavnZyyZjk3p6h1jceZ nfnmSxm60LrLWMb2G0Sdvg/4QWfPx1IX/RYi5ZbnkaEIBDb/dvZvWQKXVOAWZVEI5vtJ Hv5XXkREkT6pUVlV5F2mG9oC4f1f6NWoNyKAyoKRBRh+faK/vfDx17tSP3MO/TUnp6g6 aLg9RvDAObFwGsbJgvhY/fDpnj92PxoaX8TW6bGWqkzoHwP5Q3W3UN3n6p9QOhCv35wq 9xjA== X-Gm-Message-State: ACrzQf0BDvLVmc6jtL1738AxrxPoFoTMGNqfXBu11Vz62wNNiTj6Dkgc wuTEuCpKAGXTFNquDzoH+v3LIXur6859Ow== X-Google-Smtp-Source: AMsMyM5aGmEC1bXhq9ElUQRIPpPaZe7VRvlm9bPw9WOKyArG4lkzYyKCFa5zZjO2U+rxUXRIEiKY650dUY2/vw== X-Received: from dmatlack-n2d-128.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1309]) (user=dmatlack job=sendgmr) by 2002:a17:90b:10a:b0:200:2849:235f with SMTP id p10-20020a17090b010a00b002002849235fmr813740pjz.1.1663781759985; Wed, 21 Sep 2022 10:35:59 -0700 (PDT) Date: Wed, 21 Sep 2022 10:35:41 -0700 In-Reply-To: <20220921173546.2674386-1-dmatlack@google.com> Mime-Version: 1.0 References: <20220921173546.2674386-1-dmatlack@google.com> X-Mailer: git-send-email 2.37.3.998.g577e59143f-goog Message-ID: <20220921173546.2674386-6-dmatlack@google.com> Subject: [PATCH v3 05/10] KVM: x86/mmu: Avoid memslot lookup during KVM_PFN_ERR_HWPOISON handling From: David Matlack To: Paolo Bonzini Cc: Sean Christopherson , kvm@vger.kernel.org, David Matlack , Kai Huang , Isaku Yamahata , Peter Xu Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Pass the kvm_page_fault struct down to kvm_handle_error_pfn() to avoid a memslot lookup when handling KVM_PFN_ERR_HWPOISON. Opportunistically move the gfn_to_hva_memslot() call and @current down into kvm_send_hwpoison_signal() to cut down on line lengths. No functional change intended. Signed-off-by: David Matlack Reviewed-by: Isaku Yamahata --- arch/x86/kvm/mmu/mmu.c | 16 +++++++++------- 1 file changed, 9 insertions(+), 7 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 49a5e38ecc5c..b6f84e470677 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3136,23 +3136,25 @@ static int __direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) return ret; } -static void kvm_send_hwpoison_signal(unsigned long address, struct task_struct *tsk) +static void kvm_send_hwpoison_signal(struct kvm_memory_slot *slot, gfn_t gfn) { - send_sig_mceerr(BUS_MCEERR_AR, (void __user *)address, PAGE_SHIFT, tsk); + unsigned long hva = gfn_to_hva_memslot(slot, gfn); + + send_sig_mceerr(BUS_MCEERR_AR, (void __user *)hva, PAGE_SHIFT, current); } -static int kvm_handle_error_pfn(struct kvm_vcpu *vcpu, gfn_t gfn, kvm_pfn_t pfn) +static int kvm_handle_error_pfn(struct kvm_page_fault *fault) { /* * Do not cache the mmio info caused by writing the readonly gfn * into the spte otherwise read access on readonly gfn also can * caused mmio page fault and treat it as mmio access. 
*/ - if (pfn == KVM_PFN_ERR_RO_FAULT) + if (fault->pfn == KVM_PFN_ERR_RO_FAULT) return RET_PF_EMULATE; - if (pfn == KVM_PFN_ERR_HWPOISON) { - kvm_send_hwpoison_signal(kvm_vcpu_gfn_to_hva(vcpu, gfn), current); + if (fault->pfn == KVM_PFN_ERR_HWPOISON) { + kvm_send_hwpoison_signal(fault->slot, fault->gfn); return RET_PF_RETRY; } @@ -4193,7 +4195,7 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) return ret; if (unlikely(is_error_pfn(fault->pfn))) - return kvm_handle_error_pfn(vcpu, fault->gfn, fault->pfn); + return kvm_handle_error_pfn(fault); return RET_PF_CONTINUE; } From patchwork Wed Sep 21 17:35:42 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12984048 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3AFFBC6FA8B for ; Wed, 21 Sep 2022 17:36:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230380AbiIURgH (ORCPT ); Wed, 21 Sep 2022 13:36:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34336 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230326AbiIURgD (ORCPT ); Wed, 21 Sep 2022 13:36:03 -0400 Received: from mail-pl1-x64a.google.com (mail-pl1-x64a.google.com [IPv6:2607:f8b0:4864:20::64a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A1320A285F for ; Wed, 21 Sep 2022 10:36:02 -0700 (PDT) Received: by mail-pl1-x64a.google.com with SMTP id n10-20020a170902e54a00b001782663dcaeso4265463plf.18 for ; Wed, 21 Sep 2022 10:36:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=txhHYdjk2BSCdeMy6FWl5BicUnTO60zW2ErXKNHJFxo=; b=fhUhbbm4U1KDnOOK6JMDZsP6I/dWi1kiOpam8aXaUtl8fLpRcGdxCphZBQiBKdkazJ YrpgQaDMl11lNKitU9Ja9Kmbbj+Ah6mo3nH5zwEectca7XtphXO/Mca3SvxQU28+BLl1 bAMNCQFwYa2+mpa8RnkKAvOsUXmzgwTn+cvgGF4oEhHPKkWLnyJ5lEaYgprVgyiIxFBU Zkpn28qvN83un6OYd15cQO4ikfYkdgFHxAdXSNQHJVOnmLzTOC/FF2otvXc4PgGbQeAH nucFyor09ZXV+uwPR6E6nrJ1MlFHAFeqa/KBoga8vQvOdYhme9VmMgPzC49dEOtsj6Il nIAw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=txhHYdjk2BSCdeMy6FWl5BicUnTO60zW2ErXKNHJFxo=; b=XKoPnhLne947r2OnxM5Z64Q8MqrgaczfGTdBnTpEnP47uHt8QDnjKPA27Cw0s54xwe 07Ot0IYLZB1MeNCsqU6aWThD5bNVlLADm9ysaUwxlxn23kNfZ/A8E3WkKgWsnKUTV4uy xgKEFdyNfa4nfn7vQUbZEP8xT1HF9tWXfLzvq6mT2IfsrF76lANM0DS+VAeKnwpybBSv lEopDED0yrVWp+xQQT0hQFF8+u93c7KGkthpiY5OShr25kNJtHdI8rqaEGSCEVOavLJo 70H+kd0fINN/K3Qjr2Zq7sM/qxTULeeSJIivr0tgtUcmnzGr6+dH3AgpIjDlQvkXu7zI a2dw== X-Gm-Message-State: ACrzQf14CB6x2eTVi1oRrIEJviXQa9kD0ooAkIRWVUBcn5JO7oIHyL9q t9j8aVH2P8bqgOMTkBAY9LJoeBzS1AaUlQ== X-Google-Smtp-Source: AMsMyM5q5tC6BDAm6lOOyG+ZdtaapT6tr17ru8tIFimVI2kbdzaJe4QlOshpRpCKjUJ9fIA8TLAsDTrr2BlqQA== X-Received: from dmatlack-n2d-128.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1309]) (user=dmatlack job=sendgmr) by 2002:a17:902:ec87:b0:176:d549:2f28 with SMTP id x7-20020a170902ec8700b00176d5492f28mr5831759plg.12.1663781761949; Wed, 21 Sep 2022 10:36:01 -0700 (PDT) Date: Wed, 21 Sep 2022 10:35:42 -0700 In-Reply-To: 
<20220921173546.2674386-1-dmatlack@google.com> Mime-Version: 1.0 References: <20220921173546.2674386-1-dmatlack@google.com> X-Mailer: git-send-email 2.37.3.998.g577e59143f-goog Message-ID: <20220921173546.2674386-7-dmatlack@google.com> Subject: [PATCH v3 06/10] KVM: x86/mmu: Handle no-slot faults in kvm_faultin_pfn() From: David Matlack To: Paolo Bonzini Cc: Sean Christopherson , kvm@vger.kernel.org, David Matlack , Kai Huang , Isaku Yamahata , Peter Xu Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Handle faults on GFNs that do not have a backing memslot in kvm_faultin_pfn() and drop handle_abnormal_pfn(). This eliminates duplicate code in the various page fault handlers. Opportunistically tweak the comment about handling gfn > host.MAXPHYADDR to reflect that the effect of returning RET_PF_EMULATE at that point is to avoid creating an MMIO SPTE for such GFNs. No functional change intended. Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmu.c | 56 ++++++++++++++++++---------------- arch/x86/kvm/mmu/paging_tmpl.h | 6 +--- 2 files changed, 31 insertions(+), 31 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index b6f84e470677..e3b248385154 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3161,28 +3161,32 @@ static int kvm_handle_error_pfn(struct kvm_page_fault *fault) return -EFAULT; } -static int handle_abnormal_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault, - unsigned int access) +static int kvm_handle_noslot_fault(struct kvm_vcpu *vcpu, + struct kvm_page_fault *fault, + unsigned int access) { - if (unlikely(!fault->slot)) { - gva_t gva = fault->is_tdp ? 0 : fault->addr; + gva_t gva = fault->is_tdp ? 0 : fault->addr; - vcpu_cache_mmio_info(vcpu, gva, fault->gfn, - access & shadow_mmio_access_mask); - /* - * If MMIO caching is disabled, emulate immediately without - * touching the shadow page tables as attempting to install an - * MMIO SPTE will just be an expensive nop. Do not cache MMIO - * whose gfn is greater than host.MAXPHYADDR, any guest that - * generates such gfns is running nested and is being tricked - * by L0 userspace (you can observe gfn > L1.MAXPHYADDR if - * and only if L1's MAXPHYADDR is inaccurate with respect to - * the hardware's). - */ - if (unlikely(!enable_mmio_caching) || - unlikely(fault->gfn > kvm_mmu_max_gfn())) - return RET_PF_EMULATE; - } + vcpu_cache_mmio_info(vcpu, gva, fault->gfn, + access & shadow_mmio_access_mask); + + /* + * If MMIO caching is disabled, emulate immediately without + * touching the shadow page tables as attempting to install an + * MMIO SPTE will just be an expensive nop. + */ + if (unlikely(!enable_mmio_caching)) + return RET_PF_EMULATE; + + /* + * Do not create an MMIO SPTE for a gfn greater than host.MAXPHYADDR, + * any guest that generates such gfns is running nested and is being + * tricked by L0 userspace (you can observe gfn > L1.MAXPHYADDR if and + * only if L1's MAXPHYADDR is inaccurate with respect to the + * hardware's). 
+ */ + if (unlikely(fault->gfn > kvm_mmu_max_gfn())) + return RET_PF_EMULATE; return RET_PF_CONTINUE; } @@ -4183,7 +4187,8 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault return RET_PF_CONTINUE; } -static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) +static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault, + unsigned int access) { int ret; @@ -4197,6 +4202,9 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) if (unlikely(is_error_pfn(fault->pfn))) return kvm_handle_error_pfn(fault); + if (unlikely(!fault->slot)) + return kvm_handle_noslot_fault(vcpu, fault, access); + return RET_PF_CONTINUE; } @@ -4247,11 +4255,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault if (r) return r; - r = kvm_faultin_pfn(vcpu, fault); - if (r != RET_PF_CONTINUE) - return r; - - r = handle_abnormal_pfn(vcpu, fault, ACC_ALL); + r = kvm_faultin_pfn(vcpu, fault, ACC_ALL); if (r != RET_PF_CONTINUE) return r; diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h index 98f4abce4eaf..e014e09ac2c1 100644 --- a/arch/x86/kvm/mmu/paging_tmpl.h +++ b/arch/x86/kvm/mmu/paging_tmpl.h @@ -837,11 +837,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault else fault->max_level = walker.level; - r = kvm_faultin_pfn(vcpu, fault); - if (r != RET_PF_CONTINUE) - return r; - - r = handle_abnormal_pfn(vcpu, fault, walker.pte_access); + r = kvm_faultin_pfn(vcpu, fault, walker.pte_access); if (r != RET_PF_CONTINUE) return r; From patchwork Wed Sep 21 17:35:43 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12984049 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B2FCCECAAD8 for ; Wed, 21 Sep 2022 17:36:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230382AbiIURgJ (ORCPT ); Wed, 21 Sep 2022 13:36:09 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34380 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230243AbiIURgF (ORCPT ); Wed, 21 Sep 2022 13:36:05 -0400 Received: from mail-yw1-x114a.google.com (mail-yw1-x114a.google.com [IPv6:2607:f8b0:4864:20::114a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3501AA2869 for ; Wed, 21 Sep 2022 10:36:04 -0700 (PDT) Received: by mail-yw1-x114a.google.com with SMTP id 00721157ae682-3451e7b0234so57573207b3.23 for ; Wed, 21 Sep 2022 10:36:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=glnAthJ1ZVuR8FtpSWEyu9eNeNWHww8uzn3egQlazAY=; b=TGMzpnXuNZnMI3x+IULgXwfwYj9hosKR4RLofz8C2gppQZO08QnyANuPpkwOH/RG4E 0jxElkggc6/sm+0PENdaDwzAQBSRNiwCuxN0oucrJtvF9PCjpU2ZdB+RV6Tf/1DmceKW CxRUzkPaw3LolUJqHkHEfRwbqNsZooCvGhsLHoPTVglWOqeOyzAaGBwBovHHEsEJytqC q+kFbFlMfN465VNbBZy7+Sk2H1ooLvq8QwtUckfbUtHD1LHGVAw5SAtOUp7pucEs+k1w +BVVcNymBgeh173O2DRy3e8Yrj5oU8hL2NFJZubIhxPiRUF7d/WP7kOFtpbcS2AdNCPb 1byQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to 
:date:x-gm-message-state:from:to:cc:subject:date; bh=glnAthJ1ZVuR8FtpSWEyu9eNeNWHww8uzn3egQlazAY=; b=GAOr6Mow/Pya1EUUioE1Cv5jtMgdSR7+KG5h5Z8Dz48lXMiVlRE17fwC2sZExoC5ud 7U+fjJX0/McRSrB34ciBxUhHWam7lYftGfQsj0kZmFXpwinF4XGu2NnNk9bc6m+mo6YK s+9QTQN7c2HcQ3rR/jtPoWLE2qh0wENo245rckp3b37yOMSS3zzrb03gJY6njH8AywSD V5EXk4dmXtZaua0PiZ1jIfRxkvUp3wfA28W5LnWHcjxk1l8Ts2Pe3b4T9RstymkmkZFq Wn/XrLGLlzPvDBPQVY8SE+WNRBVVkMqLEKxBYadqF+n62ki6hP2yAIfjy1ZGA10+1Iub P8Hg== X-Gm-Message-State: ACrzQf2sItoZYDXuMgEzBz+Q6KKJ4aUsxNbfCgdQJUdAJviolvBJB6HD YqpCGdw2p2IMrzBHzWXqY986Em+3TBFKaQ== X-Google-Smtp-Source: AMsMyM7cvFiaKL+el75IaEoofWiGhBswzRW4eqGQSRBXZzNcgm4mB1Qaau5bRhyMjH0L8CxZw6yGN2KutFvWTw== X-Received: from dmatlack-n2d-128.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1309]) (user=dmatlack job=sendgmr) by 2002:a25:ccd7:0:b0:6a9:72d6:a1c1 with SMTP id l206-20020a25ccd7000000b006a972d6a1c1mr25158077ybf.390.1663781763509; Wed, 21 Sep 2022 10:36:03 -0700 (PDT) Date: Wed, 21 Sep 2022 10:35:43 -0700 In-Reply-To: <20220921173546.2674386-1-dmatlack@google.com> Mime-Version: 1.0 References: <20220921173546.2674386-1-dmatlack@google.com> X-Mailer: git-send-email 2.37.3.998.g577e59143f-goog Message-ID: <20220921173546.2674386-8-dmatlack@google.com> Subject: [PATCH v3 07/10] KVM: x86/mmu: Initialize fault.{gfn,slot} earlier for direct MMUs From: David Matlack To: Paolo Bonzini Cc: Sean Christopherson , kvm@vger.kernel.org, David Matlack , Kai Huang , Isaku Yamahata , Peter Xu Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Move the initialization of fault.{gfn,slot} earlier in the page fault handling code for fully direct MMUs. This will enable a future commit to split out TDP MMU page fault handling without needing to duplicate the initialization of these 2 fields. Opportunistically take advantage of the fact that fault.gfn is initialized in kvm_tdp_page_fault() rather than recomputing it from fault->addr. No functional change intended. 
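As a rough standalone sketch of the pattern (hypothetical names, not KVM code): fields that every handler needs are filled in once at the common entry point, so handlers that are split apart later do not each have to recompute them.

#include <stdio.h>

#define PAGE_SHIFT 12

struct page_fault {
        unsigned long addr;
        unsigned long gfn;      /* derived from addr exactly once */
        int direct;
};

/* Two handlers that both rely on fault->gfn being filled in already. */
static void handle_fault_one(const struct page_fault *f)
{
        printf("handler one: gfn=%lu\n", f->gfn);
}

static void handle_fault_two(const struct page_fault *f)
{
        printf("handler two: gfn=%lu\n", f->gfn);
}

/* Common entry point: initialize shared fields before dispatching. */
static void do_page_fault(unsigned long addr, int use_two)
{
        struct page_fault fault = { .addr = addr, .direct = 1 };

        if (fault.direct)
                fault.gfn = fault.addr >> PAGE_SHIFT;

        if (use_two)
                handle_fault_two(&fault);
        else
                handle_fault_one(&fault);
}

int main(void)
{
        do_page_fault(0x123000UL, 0);
        do_page_fault(0x456000UL, 1);
        return 0;
}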
Signed-off-by: David Matlack Reviewed-by: Isaku Yamahata --- arch/x86/kvm/mmu/mmu.c | 5 +---- arch/x86/kvm/mmu/mmu_internal.h | 5 +++++ 2 files changed, 6 insertions(+), 4 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index e3b248385154..dc203973de83 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4241,9 +4241,6 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault bool is_tdp_mmu_fault = is_tdp_mmu(vcpu->arch.mmu); int r; - fault->gfn = fault->addr >> PAGE_SHIFT; - fault->slot = kvm_vcpu_gfn_to_memslot(vcpu, fault->gfn); - if (page_fault_handle_page_track(vcpu, fault)) return RET_PF_EMULATE; @@ -4347,7 +4344,7 @@ int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) if (shadow_memtype_mask && kvm_arch_has_noncoherent_dma(vcpu->kvm)) { for ( ; fault->max_level > PG_LEVEL_4K; --fault->max_level) { int page_num = KVM_PAGES_PER_HPAGE(fault->max_level); - gfn_t base = (fault->addr >> PAGE_SHIFT) & ~(page_num - 1); + gfn_t base = fault->gfn & ~(page_num - 1); if (kvm_mtrr_check_gfn_range_consistency(vcpu, base, page_num)) break; diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index 1c0a1e7c796d..1e91f24bd865 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -279,6 +279,11 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, }; int r; + if (vcpu->arch.mmu->root_role.direct) { + fault.gfn = fault.addr >> PAGE_SHIFT; + fault.slot = kvm_vcpu_gfn_to_memslot(vcpu, fault.gfn); + } + /* * Async #PF "faults", a.k.a. prefetch faults, are not faults from the * guest perspective and have already been counted at the time of the From patchwork Wed Sep 21 17:35:44 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12984050 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A1CDCC6FA82 for ; Wed, 21 Sep 2022 17:36:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230364AbiIURgL (ORCPT ); Wed, 21 Sep 2022 13:36:11 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34472 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230398AbiIURgI (ORCPT ); Wed, 21 Sep 2022 13:36:08 -0400 Received: from mail-pl1-x64a.google.com (mail-pl1-x64a.google.com [IPv6:2607:f8b0:4864:20::64a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 07761A2A8E for ; Wed, 21 Sep 2022 10:36:06 -0700 (PDT) Received: by mail-pl1-x64a.google.com with SMTP id w14-20020a170902e88e00b00177ab7a12f6so4222338plg.16 for ; Wed, 21 Sep 2022 10:36:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=NbdSnnVdKrecJYzgj3EcDEMjA4TBlJiInTSVtVLHPsA=; b=FzuUCcZ+EeNxo/gJXH++HMOi9qXMoCPN4pHXp74cexRNGqmYhXEHX+8M675UP8wTX/ b3aDUGxg61F/QDNMD9Oe9YPUKtRhtST/VlW49w+HQAPKVf71VXQ2RBZzrkrSSJKyHfMT By6BhtsODCDPfLi9Fm/SMec6U6agy0Su9oET/fa/7x4+8m1xkVhfM2lxz3J2NqC6VJn/ 0PFJymuVygGq6Dizo1YHVopWBj9Z6Q+iULIPVPTGLGdOQNYZK7/hVMORqSX6cqDp6jIY HF2dGs07N6H6671O4ywjjLF8q6y1TktFw3pxsIbWl4ilh+xf+ahKxNnZE6pfRVPtrrVq kebg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; 
d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=NbdSnnVdKrecJYzgj3EcDEMjA4TBlJiInTSVtVLHPsA=; b=YJJM0ERuJP0kkJnGA/Gvkg8qVAEYWZ9DYm8uGJLFii6d4Hz61sY7hJR8sU2TEUFBFs lJ3deVPB5qdhcaNegYghapeiLlgy5/zHv7MgPGMDxA94hM+v5pkfJiYBo2cAjGcNn6d7 ntmHiLgpOHUz3klFZ++iqrtzzipBJTy0fIZbmfKlHfv+/A1XMII4lIO1f2KbAyTN3K8l MM6QvkU0qDrwVJviUYV2ZRXAGuUCT+yEz5y9lSObw9uBoN/+yH2y0y5sIh+H3950+OyL 8OOdWpapVci756yVmBCKHmL0CL3qLweAbAKXB9ZHylTE3jbUJ33+Ib1saQ3/9xC8Z8RC NEFw== X-Gm-Message-State: ACrzQf1qj7M/VGROxZFi0dmudIWRFOmU2yTshKeWabQBPM7Np+soG+N4 KVwLaaL1TTUn9XjA2UJvFMe9Gy2eh8yZ6A== X-Google-Smtp-Source: AMsMyM4mV+g/KY/H/1REw+zg+zdOi360aTtZ7ro7oqdrh+AZca6wmtXF4m/MtEVn+ZPxHvr3RSXMjB8xQnYuWA== X-Received: from dmatlack-n2d-128.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1309]) (user=dmatlack job=sendgmr) by 2002:a05:6a00:21c2:b0:52b:ff44:6680 with SMTP id t2-20020a056a0021c200b0052bff446680mr30194202pfj.57.1663781765255; Wed, 21 Sep 2022 10:36:05 -0700 (PDT) Date: Wed, 21 Sep 2022 10:35:44 -0700 In-Reply-To: <20220921173546.2674386-1-dmatlack@google.com> Mime-Version: 1.0 References: <20220921173546.2674386-1-dmatlack@google.com> X-Mailer: git-send-email 2.37.3.998.g577e59143f-goog Message-ID: <20220921173546.2674386-9-dmatlack@google.com> Subject: [PATCH v3 08/10] KVM: x86/mmu: Split out TDP MMU page fault handling From: David Matlack To: Paolo Bonzini Cc: Sean Christopherson , kvm@vger.kernel.org, David Matlack , Kai Huang , Isaku Yamahata , Peter Xu Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Split out the page fault handling for the TDP MMU to a separate function. This creates some duplicate code, but makes the TDP MMU fault handler simpler to read by eliminating branches and will enable future cleanups by allowing the TDP MMU and non-TDP MMU fault paths to diverge. Only compile in the TDP MMU fault handler for 64-bit builds since kvm_tdp_mmu_map() does not exist in 32-bit builds. No functional change intended. 
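As a rough standalone sketch of the same shape (hypothetical names, not KVM code): a single branchy handler is split into two specialized handlers, and the optional one is only compiled in when it can exist, mirroring the CONFIG_X86_64 guard described above.

#include <stdio.h>

/* Stand-in for CONFIG_X86_64: set to 0 to compile out the second path. */
#define HAVE_TDP 1

static int common_prologue(int fault)
{
        return fault < 0;       /* shared check; each handler calls it itself */
}

static int handle_fault_shadow(int fault)
{
        if (common_prologue(fault))
                return -1;
        return printf("shadow-style path: %d\n", fault);
}

#if HAVE_TDP
static int handle_fault_tdp(int fault)
{
        if (common_prologue(fault)) /* small duplication, but no shared branches */
                return -1;
        return printf("tdp-style path: %d\n", fault);
}
#endif

static int tdp_enabled_flag = 1;        /* stand-in for tdp_mmu_enabled */

static int handle_fault(int fault)
{
#if HAVE_TDP
        if (tdp_enabled_flag)
                return handle_fault_tdp(fault);
#endif
        return handle_fault_shadow(fault);
}

int main(void)
{
        return handle_fault(3) > 0 ? 0 : 1;
}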
Signed-off-by: David Matlack Reviewed-by: Isaku Yamahata --- arch/x86/kvm/mmu/mmu.c | 62 ++++++++++++++++++++++++++++++++---------- 1 file changed, 48 insertions(+), 14 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index dc203973de83..b36f351138f7 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4238,7 +4238,6 @@ static bool is_page_fault_stale(struct kvm_vcpu *vcpu, static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) { - bool is_tdp_mmu_fault = is_tdp_mmu(vcpu->arch.mmu); int r; if (page_fault_handle_page_track(vcpu, fault)) @@ -4257,11 +4256,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault return r; r = RET_PF_RETRY; - - if (is_tdp_mmu_fault) - read_lock(&vcpu->kvm->mmu_lock); - else - write_lock(&vcpu->kvm->mmu_lock); + write_lock(&vcpu->kvm->mmu_lock); if (is_page_fault_stale(vcpu, fault)) goto out_unlock; @@ -4270,16 +4265,10 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault if (r) goto out_unlock; - if (is_tdp_mmu_fault) - r = kvm_tdp_mmu_map(vcpu, fault); - else - r = __direct_map(vcpu, fault); + r = __direct_map(vcpu, fault); out_unlock: - if (is_tdp_mmu_fault) - read_unlock(&vcpu->kvm->mmu_lock); - else - write_unlock(&vcpu->kvm->mmu_lock); + write_unlock(&vcpu->kvm->mmu_lock); kvm_release_pfn_clean(fault->pfn); return r; } @@ -4327,6 +4316,46 @@ int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code, } EXPORT_SYMBOL_GPL(kvm_handle_page_fault); +#ifdef CONFIG_X86_64 +static int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu, + struct kvm_page_fault *fault) +{ + int r; + + if (page_fault_handle_page_track(vcpu, fault)) + return RET_PF_EMULATE; + + r = fast_page_fault(vcpu, fault); + if (r != RET_PF_INVALID) + return r; + + r = mmu_topup_memory_caches(vcpu, false); + if (r) + return r; + + r = kvm_faultin_pfn(vcpu, fault, ACC_ALL); + if (r != RET_PF_CONTINUE) + return r; + + r = RET_PF_RETRY; + read_lock(&vcpu->kvm->mmu_lock); + + if (is_page_fault_stale(vcpu, fault)) + goto out_unlock; + + r = make_mmu_pages_available(vcpu); + if (r) + goto out_unlock; + + r = kvm_tdp_mmu_map(vcpu, fault); + +out_unlock: + read_unlock(&vcpu->kvm->mmu_lock); + kvm_release_pfn_clean(fault->pfn); + return r; +} +#endif + int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) { /* @@ -4351,6 +4380,11 @@ int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) } } +#ifdef CONFIG_X86_64 + if (tdp_mmu_enabled) + return kvm_tdp_mmu_page_fault(vcpu, fault); +#endif + return direct_page_fault(vcpu, fault); } From patchwork Wed Sep 21 17:35:45 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12984051 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C7E46C6FA82 for ; Wed, 21 Sep 2022 17:36:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230386AbiIURgO (ORCPT ); Wed, 21 Sep 2022 13:36:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34496 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230484AbiIURgK (ORCPT ); Wed, 21 Sep 2022 13:36:10 -0400 Received: from mail-yw1-x1149.google.com (mail-yw1-x1149.google.com [IPv6:2607:f8b0:4864:20::1149]) 
Date: Wed, 21 Sep 2022 10:35:45 -0700
In-Reply-To: <20220921173546.2674386-1-dmatlack@google.com>
References: <20220921173546.2674386-1-dmatlack@google.com>
Message-ID: <20220921173546.2674386-10-dmatlack@google.com>
Subject: [PATCH v3 09/10] KVM: x86/mmu: Stop needlessly making MMU pages available for TDP MMU faults
From: David Matlack
To: Paolo Bonzini
Cc: Sean Christopherson, kvm@vger.kernel.org, David Matlack, Kai Huang, Isaku Yamahata, Peter Xu
X-Mailing-List: kvm@vger.kernel.org

Stop calling make_mmu_pages_available() when handling TDP MMU faults.

The TDP MMU does not participate in the "available MMU pages" tracking
and limiting so calling this function is unnecessary work when handling
TDP MMU faults.
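For context (an illustrative condensation, not part of the applied
patch), the read-locked section of kvm_tdp_mmu_page_fault() after this
change reduces to the lines below; only direct_page_fault(), i.e. the
shadow MMU path, still calls make_mmu_pages_available():

        r = RET_PF_RETRY;
        read_lock(&vcpu->kvm->mmu_lock);

        if (is_page_fault_stale(vcpu, fault))
                goto out_unlock;

        /* No make_mmu_pages_available() call here anymore. */
        r = kvm_tdp_mmu_map(vcpu, fault);

out_unlock:
        read_unlock(&vcpu->kvm->mmu_lock);
        kvm_release_pfn_clean(fault->pfn);
        return r;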
Signed-off-by: David Matlack
Reviewed-by: Isaku Yamahata
---
 arch/x86/kvm/mmu/mmu.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index b36f351138f7..4ad70fa371df 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4343,10 +4343,6 @@ static int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu,
 	if (is_page_fault_stale(vcpu, fault))
 		goto out_unlock;
 
-	r = make_mmu_pages_available(vcpu);
-	if (r)
-		goto out_unlock;
-
 	r = kvm_tdp_mmu_map(vcpu, fault);
 
 out_unlock:

From patchwork Wed Sep 21 17:35:46 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12984052
Date: Wed, 21 Sep 2022 10:35:46 -0700
In-Reply-To: <20220921173546.2674386-1-dmatlack@google.com>
References: <20220921173546.2674386-1-dmatlack@google.com>
Message-ID: <20220921173546.2674386-11-dmatlack@google.com>
Subject: [PATCH v3 10/10] KVM: x86/mmu: Rename __direct_map() to direct_map()
From: David Matlack
To: Paolo Bonzini
Cc: Sean Christopherson, kvm@vger.kernel.org, David Matlack, Kai Huang, Isaku Yamahata, Peter Xu
X-Mailing-List: kvm@vger.kernel.org

Rename __direct_map() to direct_map() since the leading underscores are
unnecessary. This also makes the page fault handler names more
consistent: kvm_tdp_mmu_page_fault() calls kvm_tdp_mmu_map() and
direct_page_fault() calls direct_map().

Opportunistically make some trivial cleanups to comments that had to be
modified anyway since they mentioned __direct_map(). Specifically, use
"()" when referring to functions, and include kvm_tdp_mmu_map() among
the various callers of disallowed_hugepage_adjust().

No functional change intended.

Signed-off-by: David Matlack
Reviewed-by: Isaku Yamahata
---
 arch/x86/kvm/mmu/mmu.c          | 14 +++++++-------
 arch/x86/kvm/mmu/mmu_internal.h |  2 +-
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 4ad70fa371df..a0b4bc3c9202 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3079,11 +3079,11 @@ void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_
 	    is_shadow_present_pte(spte) &&
 	    !is_large_pte(spte)) {
 		/*
-		 * A small SPTE exists for this pfn, but FNAME(fetch)
-		 * and __direct_map would like to create a large PTE
-		 * instead: just force them to go down another level,
-		 * patching back for them into pfn the next 9 bits of
-		 * the address.
+		 * A small SPTE exists for this pfn, but FNAME(fetch),
+		 * direct_map(), or kvm_tdp_mmu_map() would like to create a
+		 * large PTE instead: just force them to go down another level,
+		 * patching back for them into pfn the next 9 bits of the
+		 * address.
 		 */
 		u64 page_mask = KVM_PAGES_PER_HPAGE(cur_level) -
 				KVM_PAGES_PER_HPAGE(cur_level - 1);
@@ -3092,7 +3092,7 @@ void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_
 	}
 }
 
-static int __direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
+static int direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
 	struct kvm_shadow_walk_iterator it;
 	struct kvm_mmu_page *sp;
@@ -4265,7 +4265,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	if (r)
 		goto out_unlock;
 
-	r = __direct_map(vcpu, fault);
+	r = direct_map(vcpu, fault);
 
 out_unlock:
 	write_unlock(&vcpu->kvm->mmu_lock);
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 1e91f24bd865..b8c116ec1a89 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -198,7 +198,7 @@ struct kvm_page_fault {
 	/*
 	 * Maximum page size that can be created for this fault; input to
-	 * FNAME(fetch), __direct_map and kvm_tdp_mmu_map.
+	 * FNAME(fetch), direct_map() and kvm_tdp_mmu_map().
 	 */
 	u8 max_level;