From patchwork Fri Aug 26 23:12:18 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12956692
Date: Fri, 26 Aug 2022 16:12:18 -0700
In-Reply-To: <20220826231227.4096391-1-dmatlack@google.com>
References: <20220826231227.4096391-1-dmatlack@google.com>
Message-ID: <20220826231227.4096391-2-dmatlack@google.com>
Subject: [PATCH v2 01/10] KVM: x86/mmu: Change tdp_mmu to a read-only parameter
From: David Matlack
To: Paolo Bonzini
Cc: Sean Christopherson, kvm@vger.kernel.org, David Matlack, Kai Huang, Peter Xu
X-Mailing-List: kvm@vger.kernel.org

Change tdp_mmu to a read-only parameter and drop the per-vm tdp_mmu_enabled.
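As a point of reference, the read-only parameter pattern the series switches to looks like the following minimal, hypothetical standalone module (the demo_* names and tdp_mmu_demo variable are invented for illustration and are not KVM code): with permissions 0444 the value can be set at load time, e.g. via the kernel command line, but cannot be flipped through sysfs afterwards, so a single global bool stays fixed for the lifetime of every VM.

// SPDX-License-Identifier: GPL-2.0
/* Hypothetical demo module, not part of KVM. */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/moduleparam.h>

/* Settable only at load time; 0444 makes the sysfs node read-only. */
static bool tdp_mmu_demo = true;
module_param_named(tdp_mmu, tdp_mmu_demo, bool, 0444);

static int __init demo_init(void)
{
	pr_info("tdp_mmu=%d for the lifetime of the module\n", tdp_mmu_demo);
	return 0;
}
module_init(demo_init);

static void __exit demo_exit(void)
{
}
module_exit(demo_exit);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Read-only module parameter demo");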
For 32-bit KVM, make tdp_mmu_enabled a const bool so that the compiler can continue omitting calls to the TDP MMU. The TDP MMU was introduced in 5.10 and has been enabled by default since 5.15. At this point there are no known functionality gaps between the TDP MMU and the shadow MMU, and the TDP MMU uses less memory and scales better with the number of vCPUs. In other words, there is no good reason to disable the TDP MMU on a live system. Do not drop tdp_mmu=N support (i.e. do not force 64-bit KVM to always use the TDP MMU) since tdp_mmu=N is still used to get test coverage of KVM's shadow MMU TDP support, which is used in 32-bit KVM. Signed-off-by: David Matlack --- arch/x86/include/asm/kvm_host.h | 9 ------ arch/x86/kvm/mmu.h | 11 +++---- arch/x86/kvm/mmu/mmu.c | 54 ++++++++++++++++++++++----------- arch/x86/kvm/mmu/tdp_mmu.c | 9 ++---- 4 files changed, 44 insertions(+), 39 deletions(-) base-commit: 372d07084593dc7a399bf9bee815711b1fb1bcf2 prerequisite-patch-id: 2e3661ba8856c29b769499bac525b6943d9284b8 diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 2c96c43c313a..d76059270a43 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1262,15 +1262,6 @@ struct kvm_arch { struct task_struct *nx_lpage_recovery_thread; #ifdef CONFIG_X86_64 - /* - * Whether the TDP MMU is enabled for this VM. This contains a - * snapshot of the TDP MMU module parameter from when the VM was - * created and remains unchanged for the life of the VM. If this is - * true, TDP MMU handler functions will run for various MMU - * operations. - */ - bool tdp_mmu_enabled; - /* * List of struct kvm_mmu_pages being used as roots. * All struct kvm_mmu_pages in the list should have diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h index 6bdaacb6faa0..dd014bece7f0 100644 --- a/arch/x86/kvm/mmu.h +++ b/arch/x86/kvm/mmu.h @@ -230,15 +230,14 @@ static inline bool kvm_shadow_root_allocated(struct kvm *kvm) } #ifdef CONFIG_X86_64 -static inline bool is_tdp_mmu_enabled(struct kvm *kvm) { return kvm->arch.tdp_mmu_enabled; } -#else -static inline bool is_tdp_mmu_enabled(struct kvm *kvm) { return false; } -#endif - +extern bool tdp_mmu_enabled; static inline bool kvm_memslots_have_rmaps(struct kvm *kvm) { - return !is_tdp_mmu_enabled(kvm) || kvm_shadow_root_allocated(kvm); + return !tdp_mmu_enabled || kvm_shadow_root_allocated(kvm); } +#else +static inline bool kvm_memslots_have_rmaps(struct kvm *kvm) { return true; } +#endif static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level) { diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index e418ef3ecfcb..7caf51023d47 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -98,6 +98,16 @@ module_param_named(flush_on_reuse, force_flush_and_sync_on_reuse, bool, 0644); */ bool tdp_enabled = false; +bool __read_mostly tdp_mmu_allowed; + +#ifdef CONFIG_X86_64 +bool __read_mostly tdp_mmu_enabled = true; +module_param_named(tdp_mmu, tdp_mmu_enabled, bool, 0444); +#else +/* TDP MMU is not supported on 32-bit KVM. 
*/ +const bool tdp_mmu_enabled; +#endif + static int max_huge_page_level __read_mostly; static int tdp_root_level __read_mostly; static int max_tdp_level __read_mostly; @@ -1253,7 +1263,7 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm, { struct kvm_rmap_head *rmap_head; - if (is_tdp_mmu_enabled(kvm)) + if (tdp_mmu_enabled) kvm_tdp_mmu_clear_dirty_pt_masked(kvm, slot, slot->base_gfn + gfn_offset, mask, true); @@ -1286,7 +1296,7 @@ static void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm, { struct kvm_rmap_head *rmap_head; - if (is_tdp_mmu_enabled(kvm)) + if (tdp_mmu_enabled) kvm_tdp_mmu_clear_dirty_pt_masked(kvm, slot, slot->base_gfn + gfn_offset, mask, false); @@ -1369,7 +1379,7 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm, } } - if (is_tdp_mmu_enabled(kvm)) + if (tdp_mmu_enabled) write_protected |= kvm_tdp_mmu_write_protect_gfn(kvm, slot, gfn, min_level); @@ -1532,7 +1542,7 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range) if (kvm_memslots_have_rmaps(kvm)) flush = kvm_handle_gfn_range(kvm, range, kvm_zap_rmap); - if (is_tdp_mmu_enabled(kvm)) + if (tdp_mmu_enabled) flush = kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush); return flush; @@ -1545,7 +1555,7 @@ bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range) if (kvm_memslots_have_rmaps(kvm)) flush = kvm_handle_gfn_range(kvm, range, kvm_set_pte_rmap); - if (is_tdp_mmu_enabled(kvm)) + if (tdp_mmu_enabled) flush |= kvm_tdp_mmu_set_spte_gfn(kvm, range); return flush; @@ -1618,7 +1628,7 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range) if (kvm_memslots_have_rmaps(kvm)) young = kvm_handle_gfn_range(kvm, range, kvm_age_rmap); - if (is_tdp_mmu_enabled(kvm)) + if (tdp_mmu_enabled) young |= kvm_tdp_mmu_age_gfn_range(kvm, range); return young; @@ -1631,7 +1641,7 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range) if (kvm_memslots_have_rmaps(kvm)) young = kvm_handle_gfn_range(kvm, range, kvm_test_age_rmap); - if (is_tdp_mmu_enabled(kvm)) + if (tdp_mmu_enabled) young |= kvm_tdp_mmu_test_age_gfn(kvm, range); return young; @@ -3543,7 +3553,7 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu) if (r < 0) goto out_unlock; - if (is_tdp_mmu_enabled(vcpu->kvm)) { + if (tdp_mmu_enabled) { root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu); mmu->root.hpa = root; } else if (shadow_root_level >= PT64_ROOT_4LEVEL) { @@ -5662,6 +5672,9 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level, tdp_root_level = tdp_forced_root_level; max_tdp_level = tdp_max_root_level; +#ifdef CONFIG_X86_64 + tdp_mmu_enabled = tdp_mmu_allowed && tdp_enabled; +#endif /* * max_huge_page_level reflects KVM's MMU capabilities irrespective * of kernel support, e.g. KVM may be capable of using 1GB pages when @@ -5909,7 +5922,7 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm) * write and in the same critical section as making the reload request, * e.g. before kvm_zap_obsolete_pages() could drop mmu_lock and yield. */ - if (is_tdp_mmu_enabled(kvm)) + if (tdp_mmu_enabled) kvm_tdp_mmu_invalidate_all_roots(kvm); /* @@ -5934,7 +5947,7 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm) * Deferring the zap until the final reference to the root is put would * lead to use-after-free. 
*/ - if (is_tdp_mmu_enabled(kvm)) + if (tdp_mmu_enabled) kvm_tdp_mmu_zap_invalidated_roots(kvm); } @@ -6046,7 +6059,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end) flush = kvm_rmap_zap_gfn_range(kvm, gfn_start, gfn_end); - if (is_tdp_mmu_enabled(kvm)) { + if (tdp_mmu_enabled) { for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) flush = kvm_tdp_mmu_zap_leafs(kvm, i, gfn_start, gfn_end, true, flush); @@ -6079,7 +6092,7 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm, write_unlock(&kvm->mmu_lock); } - if (is_tdp_mmu_enabled(kvm)) { + if (tdp_mmu_enabled) { read_lock(&kvm->mmu_lock); kvm_tdp_mmu_wrprot_slot(kvm, memslot, start_level); read_unlock(&kvm->mmu_lock); @@ -6322,7 +6335,7 @@ void kvm_mmu_try_split_huge_pages(struct kvm *kvm, u64 start, u64 end, int target_level) { - if (!is_tdp_mmu_enabled(kvm)) + if (!tdp_mmu_enabled) return; if (kvm_memslots_have_rmaps(kvm)) @@ -6343,7 +6356,7 @@ void kvm_mmu_slot_try_split_huge_pages(struct kvm *kvm, u64 start = memslot->base_gfn; u64 end = start + memslot->npages; - if (!is_tdp_mmu_enabled(kvm)) + if (!tdp_mmu_enabled) return; if (kvm_memslots_have_rmaps(kvm)) { @@ -6426,7 +6439,7 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm, write_unlock(&kvm->mmu_lock); } - if (is_tdp_mmu_enabled(kvm)) { + if (tdp_mmu_enabled) { read_lock(&kvm->mmu_lock); kvm_tdp_mmu_zap_collapsible_sptes(kvm, slot); read_unlock(&kvm->mmu_lock); @@ -6461,7 +6474,7 @@ void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm, write_unlock(&kvm->mmu_lock); } - if (is_tdp_mmu_enabled(kvm)) { + if (tdp_mmu_enabled) { read_lock(&kvm->mmu_lock); kvm_tdp_mmu_clear_dirty_slot(kvm, memslot); read_unlock(&kvm->mmu_lock); @@ -6496,7 +6509,7 @@ void kvm_mmu_zap_all(struct kvm *kvm) kvm_mmu_commit_zap_page(kvm, &invalid_list); - if (is_tdp_mmu_enabled(kvm)) + if (tdp_mmu_enabled) kvm_tdp_mmu_zap_all(kvm); write_unlock(&kvm->mmu_lock); @@ -6661,6 +6674,13 @@ void __init kvm_mmu_x86_module_init(void) if (nx_huge_pages == -1) __set_nx_huge_pages(get_nx_auto_mode()); + /* + * Snapshot userspace's desire to enable the TDP MMU. Whether or not the + * TDP MMU is actually enabled is determined in kvm_configure_mmu() + * when the vendor module is loaded. + */ + tdp_mmu_allowed = tdp_mmu_enabled; + kvm_mmu_spte_module_init(); } diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index bf2ccf9debca..e7d0f21fbbe8 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -10,23 +10,18 @@ #include #include -static bool __read_mostly tdp_mmu_enabled = true; -module_param_named(tdp_mmu, tdp_mmu_enabled, bool, 0644); - /* Initializes the TDP MMU for the VM, if enabled. */ int kvm_mmu_init_tdp_mmu(struct kvm *kvm) { struct workqueue_struct *wq; - if (!tdp_enabled || !READ_ONCE(tdp_mmu_enabled)) + if (!tdp_mmu_enabled) return 0; wq = alloc_workqueue("kvm", WQ_UNBOUND|WQ_MEM_RECLAIM|WQ_CPU_INTENSIVE, 0); if (!wq) return -ENOMEM; - /* This should not be changed for the lifetime of the VM. */ - kvm->arch.tdp_mmu_enabled = true; INIT_LIST_HEAD(&kvm->arch.tdp_mmu_roots); spin_lock_init(&kvm->arch.tdp_mmu_pages_lock); INIT_LIST_HEAD(&kvm->arch.tdp_mmu_pages); @@ -48,7 +43,7 @@ static __always_inline bool kvm_lockdep_assert_mmu_lock_held(struct kvm *kvm, void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm) { - if (!kvm->arch.tdp_mmu_enabled) + if (!tdp_mmu_enabled) return; /* Also waits for any queued work items. 
*/

From patchwork Fri Aug 26 23:12:19 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12956693
Date: Fri, 26 Aug 2022 16:12:19 -0700
In-Reply-To: <20220826231227.4096391-1-dmatlack@google.com>
References: <20220826231227.4096391-1-dmatlack@google.com>
Message-ID: <20220826231227.4096391-3-dmatlack@google.com>
Subject: [PATCH v2 02/10] KVM: x86/mmu: Move TDP MMU VM init/uninit behind tdp_mmu_enabled
From: David Matlack
To: Paolo Bonzini
Cc: Sean Christopherson, kvm@vger.kernel.org, David Matlack, Kai Huang, Peter Xu
X-Mailing-List: kvm@vger.kernel.org

Move kvm_mmu_{init,uninit}_tdp_mmu() behind tdp_mmu_enabled.
This makes these functions consistent with the rest of the calls into the TDP MMU from mmu.c, and which is now possible since tdp_mmu_enabled is only modified when the x86 vendor module is loaded. i.e. It will never change during the lifetime of a VM. This change also enabled removing the stub definitions for 32-bit KVM, as the compiler will just optimize the calls out like it does for all the other TDP MMU functions. No functional change intended. Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmu.c | 11 +++++++---- arch/x86/kvm/mmu/tdp_mmu.c | 6 ------ arch/x86/kvm/mmu/tdp_mmu.h | 7 +++---- 3 files changed, 10 insertions(+), 14 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 7caf51023d47..ff428152abce 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -5973,9 +5973,11 @@ int kvm_mmu_init_vm(struct kvm *kvm) INIT_LIST_HEAD(&kvm->arch.lpage_disallowed_mmu_pages); spin_lock_init(&kvm->arch.mmu_unsync_pages_lock); - r = kvm_mmu_init_tdp_mmu(kvm); - if (r < 0) - return r; + if (tdp_mmu_enabled) { + r = kvm_mmu_init_tdp_mmu(kvm); + if (r < 0) + return r; + } node->track_write = kvm_mmu_pte_write; node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot; @@ -6005,7 +6007,8 @@ void kvm_mmu_uninit_vm(struct kvm *kvm) kvm_page_track_unregister_notifier(kvm, node); - kvm_mmu_uninit_tdp_mmu(kvm); + if (tdp_mmu_enabled) + kvm_mmu_uninit_tdp_mmu(kvm); mmu_free_vm_memory_caches(kvm); } diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index e7d0f21fbbe8..08ab3596dfaa 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -15,9 +15,6 @@ int kvm_mmu_init_tdp_mmu(struct kvm *kvm) { struct workqueue_struct *wq; - if (!tdp_mmu_enabled) - return 0; - wq = alloc_workqueue("kvm", WQ_UNBOUND|WQ_MEM_RECLAIM|WQ_CPU_INTENSIVE, 0); if (!wq) return -ENOMEM; @@ -43,9 +40,6 @@ static __always_inline bool kvm_lockdep_assert_mmu_lock_held(struct kvm *kvm, void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm) { - if (!tdp_mmu_enabled) - return; - /* Also waits for any queued work items. 
*/ destroy_workqueue(kvm->arch.tdp_mmu_zap_wq); diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h index c163f7cc23ca..9d086a103f77 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.h +++ b/arch/x86/kvm/mmu/tdp_mmu.h @@ -5,6 +5,9 @@ #include +int kvm_mmu_init_tdp_mmu(struct kvm *kvm); +void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm); + hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu); __must_check static inline bool kvm_tdp_mmu_get_root(struct kvm_mmu_page *root) @@ -66,8 +69,6 @@ u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, u64 addr, u64 *spte); #ifdef CONFIG_X86_64 -int kvm_mmu_init_tdp_mmu(struct kvm *kvm); -void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm); static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return sp->tdp_mmu_page; } static inline bool is_tdp_mmu(struct kvm_mmu *mmu) @@ -87,8 +88,6 @@ static inline bool is_tdp_mmu(struct kvm_mmu *mmu) return sp && is_tdp_mmu_page(sp) && sp->root_count; } #else -static inline int kvm_mmu_init_tdp_mmu(struct kvm *kvm) { return 0; } -static inline void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm) {} static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return false; } static inline bool is_tdp_mmu(struct kvm_mmu *mmu) { return false; } #endif From patchwork Fri Aug 26 23:12:20 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12956694 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4D20DECAAD4 for ; Fri, 26 Aug 2022 23:12:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344648AbiHZXMk (ORCPT ); Fri, 26 Aug 2022 19:12:40 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39312 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345227AbiHZXMj (ORCPT ); Fri, 26 Aug 2022 19:12:39 -0400 Received: from mail-yw1-x114a.google.com (mail-yw1-x114a.google.com [IPv6:2607:f8b0:4864:20::114a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DBCCBD87F2 for ; Fri, 26 Aug 2022 16:12:37 -0700 (PDT) Received: by mail-yw1-x114a.google.com with SMTP id 00721157ae682-337ed9110c2so47550897b3.15 for ; Fri, 26 Aug 2022 16:12:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc; bh=/h7NR+5YLyhr7liO75I183qpM/K3LGiktPqMTh28caI=; b=cTqFFfUv3s+E5rT2vmVjFjL0iW8RaTYjR7jvOHGOlbuuTcMS2mXO1qDK034sO4mS46 f49VsEstZ47cDB1zy8uafdkL/vBccLB7/ENO19TIbzjHQZO4RyxxEXm4mHq9smvtWxkN C9ca8IeaEfvbfPCSXBEwkDkenZDFow0OAamSopW6md7rXI1x01LFFv76XfpCSdoUaEER baBi5Yish8+iH9CP1H5swqrIBvoph7/w21L86hC3KJAPvYz3mO3andEDceFy2pIyJ+5o F4b3inIQV4vxKNcHEx3g+YbGkIfgeeJ7Nz2nsdqV/N9+Ath7DcVk+oDEnSxkjWCe4fT9 d/kw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc; bh=/h7NR+5YLyhr7liO75I183qpM/K3LGiktPqMTh28caI=; b=1Q2CLNuRhPrULnKFQw6kMLutnva49d/brtPTsgkOE3rdqgDy4O+Ds/EnLgCz1KEgqh rUJTkZXx9xhtKCt8p3WQCCLPUGKRRwBmKcQKKQRXd2a5dPBxZ+8vrRhP5Q+klUVk9vot oKxRya6UBwhSqy9iM4MMOVBa/iEdQRcVF8dQNv0tOfrNggu5sPON3LtMnry1E6KqE5xa E7jC/LEdM6qtTGYJQhW0lzHqLEvqt7jMdatT/XmdptwFsg40URtFbhyU2+wxjJ05GuJG 
tOu/rJKyycxE4nCSoJE9vjUOomaMUt054ZdSIRkTIJER4GFIh/1VYOfmAdlxf23UuCJ4 uRYQ== X-Gm-Message-State: ACgBeo3PFGGCmpAH5SzGQrpNp2U2PbFcY1imJ5cop7Gu9dRmNtBjWZjk 79aRwKpoVKuJqKC5yT7ugjh1rFCK3vklCg== X-Google-Smtp-Source: AA6agR755Iw6DLHa8JCqMzw05jZNiAzNiLEAtht3shAKopXpRaSQpvQ7ZmrOKzEc0X2VfEeqy2tFzK7Ed3aoXg== X-Received: from dmatlack-n2d-128.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1309]) (user=dmatlack job=sendgmr) by 2002:a0d:df0a:0:b0:33d:ab83:e816 with SMTP id i10-20020a0ddf0a000000b0033dab83e816mr1986660ywe.187.1661555557229; Fri, 26 Aug 2022 16:12:37 -0700 (PDT) Date: Fri, 26 Aug 2022 16:12:20 -0700 In-Reply-To: <20220826231227.4096391-1-dmatlack@google.com> Mime-Version: 1.0 References: <20220826231227.4096391-1-dmatlack@google.com> X-Mailer: git-send-email 2.37.2.672.g94769d06f0-goog Message-ID: <20220826231227.4096391-4-dmatlack@google.com> Subject: [PATCH v2 03/10] KVM: x86/mmu: Grab mmu_invalidate_seq in kvm_faultin_pfn() From: David Matlack To: Paolo Bonzini Cc: Sean Christopherson , kvm@vger.kernel.org, David Matlack , Kai Huang , Peter Xu Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Grab mmu_invalidate_seq in kvm_faultin_pfn() and stash it in struct kvm_page_fault. The eliminates duplicate code and reduces the amount of parameters needed for is_page_fault_stale(). Preemptively split out __kvm_faultin_pfn() to a separate function for use in subsequent commits. No functional change intended. Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmu.c | 21 ++++++++++++--------- arch/x86/kvm/mmu/mmu_internal.h | 1 + arch/x86/kvm/mmu/paging_tmpl.h | 6 +----- 3 files changed, 14 insertions(+), 14 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index ff428152abce..49dbe274c709 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4132,7 +4132,7 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work) kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, 0, true); } -static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) +static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) { struct kvm_memory_slot *slot = fault->slot; bool async; @@ -4188,12 +4188,20 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) return RET_PF_CONTINUE; } +static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) +{ + fault->mmu_seq = vcpu->kvm->mmu_invalidate_seq; + smp_rmb(); + + return __kvm_faultin_pfn(vcpu, fault); +} + /* * Returns true if the page fault is stale and needs to be retried, i.e. if the * root was invalidated by a memslot update or a relevant mmu_notifier fired. 
*/ static bool is_page_fault_stale(struct kvm_vcpu *vcpu, - struct kvm_page_fault *fault, int mmu_seq) + struct kvm_page_fault *fault) { struct kvm_mmu_page *sp = to_shadow_page(vcpu->arch.mmu->root.hpa); @@ -4213,14 +4221,12 @@ static bool is_page_fault_stale(struct kvm_vcpu *vcpu, return true; return fault->slot && - mmu_invalidate_retry_hva(vcpu->kvm, mmu_seq, fault->hva); + mmu_invalidate_retry_hva(vcpu->kvm, fault->mmu_seq, fault->hva); } static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) { bool is_tdp_mmu_fault = is_tdp_mmu(vcpu->arch.mmu); - - unsigned long mmu_seq; int r; fault->gfn = fault->addr >> PAGE_SHIFT; @@ -4237,9 +4243,6 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault if (r) return r; - mmu_seq = vcpu->kvm->mmu_invalidate_seq; - smp_rmb(); - r = kvm_faultin_pfn(vcpu, fault); if (r != RET_PF_CONTINUE) return r; @@ -4255,7 +4258,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault else write_lock(&vcpu->kvm->mmu_lock); - if (is_page_fault_stale(vcpu, fault, mmu_seq)) + if (is_page_fault_stale(vcpu, fault)) goto out_unlock; r = make_mmu_pages_available(vcpu); diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index 582def531d4d..1c0a1e7c796d 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -221,6 +221,7 @@ struct kvm_page_fault { struct kvm_memory_slot *slot; /* Outputs of kvm_faultin_pfn. */ + unsigned long mmu_seq; kvm_pfn_t pfn; hva_t hva; bool map_writable; diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h index 39e0205e7300..98f4abce4eaf 100644 --- a/arch/x86/kvm/mmu/paging_tmpl.h +++ b/arch/x86/kvm/mmu/paging_tmpl.h @@ -791,7 +791,6 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault { struct guest_walker walker; int r; - unsigned long mmu_seq; bool is_self_change_mapping; pgprintk("%s: addr %lx err %x\n", __func__, fault->addr, fault->error_code); @@ -838,9 +837,6 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault else fault->max_level = walker.level; - mmu_seq = vcpu->kvm->mmu_invalidate_seq; - smp_rmb(); - r = kvm_faultin_pfn(vcpu, fault); if (r != RET_PF_CONTINUE) return r; @@ -871,7 +867,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault r = RET_PF_RETRY; write_lock(&vcpu->kvm->mmu_lock); - if (is_page_fault_stale(vcpu, fault, mmu_seq)) + if (is_page_fault_stale(vcpu, fault)) goto out_unlock; r = make_mmu_pages_available(vcpu); From patchwork Fri Aug 26 23:12:21 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12956695 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 48577C0502A for ; Fri, 26 Aug 2022 23:12:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345427AbiHZXMm (ORCPT ); Fri, 26 Aug 2022 19:12:42 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39368 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345394AbiHZXMk (ORCPT ); Fri, 26 Aug 2022 19:12:40 -0400 Received: from mail-yw1-x114a.google.com (mail-yw1-x114a.google.com [IPv6:2607:f8b0:4864:20::114a]) by lindbergh.monkeyblade.net (Postfix) with 
ESMTPS id 0E6D9E116D for ; Fri, 26 Aug 2022 16:12:40 -0700 (PDT) Received: by mail-yw1-x114a.google.com with SMTP id 00721157ae682-340862314d9so32339897b3.3 for ; Fri, 26 Aug 2022 16:12:40 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc; bh=eGJLCnwjODhnjhv2zAcr4ml0oV6QsKT15HaCTsjDd4Q=; b=WQ1RxM3AI/PQhuLFMbbYDftuVij1c/dOxuz2zfRr7OwgvLfK8ovFdnFDBFOcmNP4Yu tXqYb7fUt9CO/2VtGskkaJTGf4sk4/IcopL6EdUCFAvd4SBEpOrgf6321ho88K1QyXYm gByFrUGzyv8Q+r1DFkANIvow3SIllFJwm93ilr0j2N/AJ2uwmfa714IqbwCfwNH6EuCv edoX+tvVDxpWyQxkGXufFgu7Zon0daHBlYLcSb+cBYT1AVmD26pBTgPN9Id4OBrk4MlX Tvp6PeNQ9i8rhPeCzkBzIK8nvofzb0CjAWCMVLwcb1viRkxzLtlmy3JI3Cjs2H5XJpC9 Ur5Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc; bh=eGJLCnwjODhnjhv2zAcr4ml0oV6QsKT15HaCTsjDd4Q=; b=KCir0m+ZOFEUBcfWOWuJpWXAufWfw5pUn6izUgnjlpmF9oBJYEmLAbDef5cgxPCXMN aTI0m+/v0ntmgq48ME0ETuVC6FwvE3B1nwGHRd9vrDXGtdgnqHxPwXjHBPie87OdQh9Z Mxzk5dvscx5Zgt1fagsb9fYPSwp//k9FOK0TIbS+pn5CAq23p4xAOUcvRB+5I3WV28yc LFhNB+IqH+W/ALxvPLOgcvXeXxaQfQzMJ3k6ZXK7GOI48pxJmj74VBQqaG4n9akA8tVg Th8sx6GTJkTkU6X1VTFOeTJYU/U8PCdOxQl/sFZILFR0e8iP1s3pGSsPcCI+5xHZl7Nw n7UA== X-Gm-Message-State: ACgBeo3GcPVszaCVP9+gC6mIyXGdC3joGdDC5BYO+ln9i6q7EHigDfV+ zUhz07oFuacwyfaEnbqxg9pdF0609soxCg== X-Google-Smtp-Source: AA6agR6WtAXquEKDfd/xwbohmWk7T+APzoRAyShNZdmwOjMl0pVNqoPblKAjvcgGfVGgkBTr2cZTE/KHJ1eUkA== X-Received: from dmatlack-n2d-128.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1309]) (user=dmatlack job=sendgmr) by 2002:a81:b71c:0:b0:340:bb98:fb38 with SMTP id v28-20020a81b71c000000b00340bb98fb38mr173433ywh.428.1661555559408; Fri, 26 Aug 2022 16:12:39 -0700 (PDT) Date: Fri, 26 Aug 2022 16:12:21 -0700 In-Reply-To: <20220826231227.4096391-1-dmatlack@google.com> Mime-Version: 1.0 References: <20220826231227.4096391-1-dmatlack@google.com> X-Mailer: git-send-email 2.37.2.672.g94769d06f0-goog Message-ID: <20220826231227.4096391-5-dmatlack@google.com> Subject: [PATCH v2 04/10] KVM: x86/mmu: Handle error PFNs in kvm_faultin_pfn() From: David Matlack To: Paolo Bonzini Cc: Sean Christopherson , kvm@vger.kernel.org, David Matlack , Kai Huang , Peter Xu Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Handle error PFNs in kvm_faultin_pfn() rather than relying on the caller to invoke handle_abnormal_pfn() after kvm_faultin_pfn(). Opportunistically rename kvm_handle_bad_page() to kvm_handle_error_pfn() to make it more consistent with is_error_pfn(). This commit moves KVM closer to being able to drop handle_abnormal_pfn(), which will reduce the amount of duplicate code in the various page fault handlers. No functional change intended. 
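For readability, this is the kvm_faultin_pfn() wrapper as it looks once this patch is applied, reassembled from the diff below; it relies on KVM-internal types and helpers (struct kvm_page_fault, __kvm_faultin_pfn(), kvm_handle_error_pfn()), so it is shown for illustration rather than as a standalone snippet.

static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
{
	int ret;

	fault->mmu_seq = vcpu->kvm->mmu_invalidate_seq;
	smp_rmb();

	ret = __kvm_faultin_pfn(vcpu, fault);
	if (ret != RET_PF_CONTINUE)
		return ret;

	/* The error-PFN check moves here so every caller gets it for free. */
	if (unlikely(is_error_pfn(fault->pfn)))
		return kvm_handle_error_pfn(vcpu, fault->gfn, fault->pfn);

	return RET_PF_CONTINUE;
}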
Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmu.c | 18 ++++++++++++------ 1 file changed, 12 insertions(+), 6 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 49dbe274c709..273e1771965c 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3144,7 +3144,7 @@ static void kvm_send_hwpoison_signal(unsigned long address, struct task_struct * send_sig_mceerr(BUS_MCEERR_AR, (void __user *)address, PAGE_SHIFT, tsk); } -static int kvm_handle_bad_page(struct kvm_vcpu *vcpu, gfn_t gfn, kvm_pfn_t pfn) +static int kvm_handle_error_pfn(struct kvm_vcpu *vcpu, gfn_t gfn, kvm_pfn_t pfn) { /* * Do not cache the mmio info caused by writing the readonly gfn @@ -3165,10 +3165,6 @@ static int kvm_handle_bad_page(struct kvm_vcpu *vcpu, gfn_t gfn, kvm_pfn_t pfn) static int handle_abnormal_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault, unsigned int access) { - /* The pfn is invalid, report the error! */ - if (unlikely(is_error_pfn(fault->pfn))) - return kvm_handle_bad_page(vcpu, fault->gfn, fault->pfn); - if (unlikely(!fault->slot)) { gva_t gva = fault->is_tdp ? 0 : fault->addr; @@ -4185,15 +4181,25 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, NULL, fault->write, &fault->map_writable, &fault->hva); + return RET_PF_CONTINUE; } static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) { + int ret; + fault->mmu_seq = vcpu->kvm->mmu_invalidate_seq; smp_rmb(); - return __kvm_faultin_pfn(vcpu, fault); + ret = __kvm_faultin_pfn(vcpu, fault); + if (ret != RET_PF_CONTINUE) + return ret; + + if (unlikely(is_error_pfn(fault->pfn))) + return kvm_handle_error_pfn(vcpu, fault->gfn, fault->pfn); + + return RET_PF_CONTINUE; } /* From patchwork Fri Aug 26 23:12:22 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12956696 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id ED8CEECAAD5 for ; Fri, 26 Aug 2022 23:12:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345443AbiHZXMp (ORCPT ); Fri, 26 Aug 2022 19:12:45 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39470 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345433AbiHZXMo (ORCPT ); Fri, 26 Aug 2022 19:12:44 -0400 Received: from mail-yw1-x1149.google.com (mail-yw1-x1149.google.com [IPv6:2607:f8b0:4864:20::1149]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2DE54E9A9B for ; Fri, 26 Aug 2022 16:12:41 -0700 (PDT) Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-33e1114437fso47203077b3.19 for ; Fri, 26 Aug 2022 16:12:41 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc; bh=ZdVaRkEUTVQaIKJDT/Ddx7gK3/urPJpcKKVV/+iYZKk=; b=baz/ChvqaA1ic/giek/4j38FjrCjwyfUIdIepPFprrdCeLvp8yEh6JkYWW49DK3M7C cp9S9eruQAdJLXMCSm47CFIVOMGBT6rjZ7SdU1liU4t8QVcPPRJ7+6zWILg/6yFThQR+ s99AjJN1qnr9sp/+0say2Cg1YI/CFuxC+8O+qOSjG0sepSkl29DjMc8wCqjYojIi09IV R2qpqlBjuquno+1mWai8ZWsluUZBQF0cCscGMKpUZBIEnfRLG0N0BStxNsCTfxZdaN55 Ma669WJHagR3AoxlT4AA3HErdIF8+f+Rz4cLW72U61oIqdsPe3wcRL1AgofZsVBLqAu+ Go+Q== 
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc; bh=ZdVaRkEUTVQaIKJDT/Ddx7gK3/urPJpcKKVV/+iYZKk=; b=hSPCh9rQnU2huOhtjaclzDPBWA6GN4C/n8SzIuExWTE3Ag4BNNefAY6m54+KrzNxJ+ ubEl6U5Ug4K3/BBn5djC8PUKuv0Q3URjpC5FtHzu7uhfpsM+34dhezGpaa4OYra0a4DF H5KFG1AE7p7LHTm+eykBXT/qUWYjpcPnQaV2Ox2YJ1ZYWz6lYUN6oQzSeq0J1LFnzibq W3UPZ6qZXMnwI0Sejo3q0SlUSaPW62M0bPF12776niIQwSLMdNayBjW3NtjKu8WVmDEI s7WumYhvuuSTEr4UQqpl0ySmc39LbqlqFMJ58KTEj6cklBSHAoiBbB+JsiP4tVGDPEpP EyBw== X-Gm-Message-State: ACgBeo1fkheYf3uicE2ofDfH+D5eNbLkfClNjcbj0JoI2t2CHNbxKdXt oKdEX2MR9IoD4agTzX+uOVWT6cuOLy4mEw== X-Google-Smtp-Source: AA6agR7rHVGOeSA3/FNrEeO0s8W3z+dkxeb3bte6aN2PALIH3udZnZ2Pw7ShkuwEFl5arnEFIIUY6I3ulllwTA== X-Received: from dmatlack-n2d-128.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1309]) (user=dmatlack job=sendgmr) by 2002:a5b:790:0:b0:671:5d18:3a3 with SMTP id b16-20020a5b0790000000b006715d1803a3mr1801788ybq.169.1661555561261; Fri, 26 Aug 2022 16:12:41 -0700 (PDT) Date: Fri, 26 Aug 2022 16:12:22 -0700 In-Reply-To: <20220826231227.4096391-1-dmatlack@google.com> Mime-Version: 1.0 References: <20220826231227.4096391-1-dmatlack@google.com> X-Mailer: git-send-email 2.37.2.672.g94769d06f0-goog Message-ID: <20220826231227.4096391-6-dmatlack@google.com> Subject: [PATCH v2 05/10] KVM: x86/mmu: Avoid memslot lookup during KVM_PFN_ERR_HWPOISON handling From: David Matlack To: Paolo Bonzini Cc: Sean Christopherson , kvm@vger.kernel.org, David Matlack , Kai Huang , Peter Xu Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Pass the kvm_page_fault struct down to kvm_handle_error_pfn() to avoid a memslot lookup when handling KVM_PFN_ERR_HWPOISON. Opportunistically move the gfn_to_hva_memslot() call and @current down into kvm_send_hwpoison_signal() to cut down on line lengths. No functional change intended. Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmu.c | 16 +++++++++------- 1 file changed, 9 insertions(+), 7 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 273e1771965c..fb30451f4b47 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3139,23 +3139,25 @@ static int __direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) return ret; } -static void kvm_send_hwpoison_signal(unsigned long address, struct task_struct *tsk) +static void kvm_send_hwpoison_signal(struct kvm_memory_slot *slot, gfn_t gfn) { - send_sig_mceerr(BUS_MCEERR_AR, (void __user *)address, PAGE_SHIFT, tsk); + unsigned long hva = gfn_to_hva_memslot(slot, gfn); + + send_sig_mceerr(BUS_MCEERR_AR, (void __user *)hva, PAGE_SHIFT, current); } -static int kvm_handle_error_pfn(struct kvm_vcpu *vcpu, gfn_t gfn, kvm_pfn_t pfn) +static int kvm_handle_error_pfn(struct kvm_page_fault *fault) { /* * Do not cache the mmio info caused by writing the readonly gfn * into the spte otherwise read access on readonly gfn also can * caused mmio page fault and treat it as mmio access. 
*/ - if (pfn == KVM_PFN_ERR_RO_FAULT) + if (fault->pfn == KVM_PFN_ERR_RO_FAULT) return RET_PF_EMULATE; - if (pfn == KVM_PFN_ERR_HWPOISON) { - kvm_send_hwpoison_signal(kvm_vcpu_gfn_to_hva(vcpu, gfn), current); + if (fault->pfn == KVM_PFN_ERR_HWPOISON) { + kvm_send_hwpoison_signal(fault->slot, fault->gfn); return RET_PF_RETRY; } @@ -4197,7 +4199,7 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) return ret; if (unlikely(is_error_pfn(fault->pfn))) - return kvm_handle_error_pfn(vcpu, fault->gfn, fault->pfn); + return kvm_handle_error_pfn(fault); return RET_PF_CONTINUE; } From patchwork Fri Aug 26 23:12:23 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12956697 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 829D6ECAAD4 for ; Fri, 26 Aug 2022 23:12:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345433AbiHZXMr (ORCPT ); Fri, 26 Aug 2022 19:12:47 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39534 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345444AbiHZXMq (ORCPT ); Fri, 26 Aug 2022 19:12:46 -0400 Received: from mail-yw1-x114a.google.com (mail-yw1-x114a.google.com [IPv6:2607:f8b0:4864:20::114a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8A3D7EA144 for ; Fri, 26 Aug 2022 16:12:44 -0700 (PDT) Received: by mail-yw1-x114a.google.com with SMTP id 00721157ae682-33dd097f993so47975537b3.10 for ; Fri, 26 Aug 2022 16:12:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc; bh=5b6tbkW/gcuFTQ/42n9E+pTWoQyrg4G/ccr6zGe/V3I=; b=E126LBnYe38LghXA6KOo/f8vh2Gz9Xy0x++mEi9UU1DPDZG1dBvp6UNu5eq1fPpqyM 8kG8b2qaJ63z70VERvWGeaZdQuecwqCg7gMbnLqNgvjhYaTKoUG9v1YM1DbXeZrlit38 K98Q0r5bazp+m5TTNnPngHSIuRepZX+HkeditJQeNALQC2rjMaDPDBrcQKsHwLgiks0T hFBw7uKmc6IBb24L34jRmOK64qtHvAmfDkbLm42zCV8SMG46jLj7Rysa4OmyxawY5QGX 3ImQ8d7NX9q7h9Ope0FAE7Zr/YaZnuuAsSr2tMJzdBPsLrhZGVU7gUW8Clt2FVz2SNWY bcGA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc; bh=5b6tbkW/gcuFTQ/42n9E+pTWoQyrg4G/ccr6zGe/V3I=; b=1fKXCnTqoMUCgNbTOiWXT4PGLc7QAPQuwxWTxisYde+/pipP7bmMQDbKj265naskZS 5RY5R8KpXa3oKrpbhqbQGpnqoB6SxcotvcunaqVEn92G4UA0aD8w6oQ6E9r11IG5cEfa cfhP1Wr6KcEhf5+Bf+vaLaoXng5g3m71QmiLwrLrHEzd0VDHVbabevcLwz6SDARvEuRI pxeIQETnMvfjphuFkxDJb4FCPSMT0tfRbZCscPGm42u//69ILsWgtgLFhp3QMCEDuPRP Lno+qWNFonR0cKq3U/8/3dljIwed0yyWORGevrcqh3YxrCayUfm+b6rTeFEUBg961/ic Sw9Q== X-Gm-Message-State: ACgBeo08kf7tB5+/qcSMXT9E6DLiE4ZxZFh6sNL59ym4psGquNIjrH7+ Vsb06WFknHNVv476ANjjYRtz3GuIUILYmA== X-Google-Smtp-Source: AA6agR5E/w+rSFNHY4wQXj7GqF6U/qnnxfbqRlV+1G2PWK0Yn6P4TRreo6zBy6IOfpAkorowAGPsJJ2JxxDUiA== X-Received: from dmatlack-n2d-128.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1309]) (user=dmatlack job=sendgmr) by 2002:a5b:c4c:0:b0:696:114c:ad25 with SMTP id d12-20020a5b0c4c000000b00696114cad25mr1670427ybr.13.1661555563375; Fri, 26 Aug 2022 16:12:43 -0700 (PDT) Date: Fri, 26 Aug 2022 16:12:23 -0700 In-Reply-To: <20220826231227.4096391-1-dmatlack@google.com> Mime-Version: 1.0 
References: <20220826231227.4096391-1-dmatlack@google.com> X-Mailer: git-send-email 2.37.2.672.g94769d06f0-goog Message-ID: <20220826231227.4096391-7-dmatlack@google.com> Subject: [PATCH v2 06/10] KVM: x86/mmu: Handle no-slot faults in kvm_faultin_pfn() From: David Matlack To: Paolo Bonzini Cc: Sean Christopherson , kvm@vger.kernel.org, David Matlack , Kai Huang , Peter Xu Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Handle faults on GFNs that do not have a backing memslot in kvm_faultin_pfn() and drop handle_abnormal_pfn(). This eliminates duplicate code in the various page fault handlers. Opportunistically tweak the comment about handling gfn > host.MAXPHYADDR to reflect that the effect of returning RET_PF_EMULATE at that point is to avoid creating an MMIO SPTE for such GFNs. No functional change intended. Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmu.c | 56 ++++++++++++++++++---------------- arch/x86/kvm/mmu/paging_tmpl.h | 6 +--- 2 files changed, 31 insertions(+), 31 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index fb30451f4b47..86282df37217 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3164,28 +3164,32 @@ static int kvm_handle_error_pfn(struct kvm_page_fault *fault) return -EFAULT; } -static int handle_abnormal_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault, - unsigned int access) +static int kvm_handle_noslot_fault(struct kvm_vcpu *vcpu, + struct kvm_page_fault *fault, + unsigned int access) { - if (unlikely(!fault->slot)) { - gva_t gva = fault->is_tdp ? 0 : fault->addr; + gva_t gva = fault->is_tdp ? 0 : fault->addr; - vcpu_cache_mmio_info(vcpu, gva, fault->gfn, - access & shadow_mmio_access_mask); - /* - * If MMIO caching is disabled, emulate immediately without - * touching the shadow page tables as attempting to install an - * MMIO SPTE will just be an expensive nop. Do not cache MMIO - * whose gfn is greater than host.MAXPHYADDR, any guest that - * generates such gfns is running nested and is being tricked - * by L0 userspace (you can observe gfn > L1.MAXPHYADDR if - * and only if L1's MAXPHYADDR is inaccurate with respect to - * the hardware's). - */ - if (unlikely(!enable_mmio_caching) || - unlikely(fault->gfn > kvm_mmu_max_gfn())) - return RET_PF_EMULATE; - } + vcpu_cache_mmio_info(vcpu, gva, fault->gfn, + access & shadow_mmio_access_mask); + + /* + * If MMIO caching is disabled, emulate immediately without + * touching the shadow page tables as attempting to install an + * MMIO SPTE will just be an expensive nop. + */ + if (unlikely(!enable_mmio_caching)) + return RET_PF_EMULATE; + + /* + * Do not create an MMIO SPTE for a gfn greater than host.MAXPHYADDR, + * any guest that generates such gfns is running nested and is being + * tricked by L0 userspace (you can observe gfn > L1.MAXPHYADDR if and + * only if L1's MAXPHYADDR is inaccurate with respect to the + * hardware's). 
+ */ + if (unlikely(fault->gfn > kvm_mmu_max_gfn())) + return RET_PF_EMULATE; return RET_PF_CONTINUE; } @@ -4187,7 +4191,8 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault return RET_PF_CONTINUE; } -static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) +static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault, + unsigned int access) { int ret; @@ -4201,6 +4206,9 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) if (unlikely(is_error_pfn(fault->pfn))) return kvm_handle_error_pfn(fault); + if (unlikely(!fault->slot)) + return kvm_handle_noslot_fault(vcpu, fault, access); + return RET_PF_CONTINUE; } @@ -4251,11 +4259,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault if (r) return r; - r = kvm_faultin_pfn(vcpu, fault); - if (r != RET_PF_CONTINUE) - return r; - - r = handle_abnormal_pfn(vcpu, fault, ACC_ALL); + r = kvm_faultin_pfn(vcpu, fault, ACC_ALL); if (r != RET_PF_CONTINUE) return r; diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h index 98f4abce4eaf..e014e09ac2c1 100644 --- a/arch/x86/kvm/mmu/paging_tmpl.h +++ b/arch/x86/kvm/mmu/paging_tmpl.h @@ -837,11 +837,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault else fault->max_level = walker.level; - r = kvm_faultin_pfn(vcpu, fault); - if (r != RET_PF_CONTINUE) - return r; - - r = handle_abnormal_pfn(vcpu, fault, walker.pte_access); + r = kvm_faultin_pfn(vcpu, fault, walker.pte_access); if (r != RET_PF_CONTINUE) return r; From patchwork Fri Aug 26 23:12:24 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12956698 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 20060ECAAD5 for ; Fri, 26 Aug 2022 23:12:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345444AbiHZXMv (ORCPT ); Fri, 26 Aug 2022 19:12:51 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39642 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345436AbiHZXMt (ORCPT ); Fri, 26 Aug 2022 19:12:49 -0400 Received: from mail-yw1-x114a.google.com (mail-yw1-x114a.google.com [IPv6:2607:f8b0:4864:20::114a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AB30FEA14F for ; Fri, 26 Aug 2022 16:12:46 -0700 (PDT) Received: by mail-yw1-x114a.google.com with SMTP id 00721157ae682-33da75a471cso47479617b3.20 for ; Fri, 26 Aug 2022 16:12:46 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc; bh=uCyv6i8mEx63HGeVnpkclCAYwfuPo59tGsDRd2/sWwE=; b=E49sNwDQnr+V5Dgqc1sIHJdcP8ACoS/XxKW4bwe4yMifG3KoVutECUp47PrvG44pJR YxWcNMQj/1xulfE7ESYhrtlFbS4a+acCa0EJwRO7uBNkJPtQLl7Guy7bH54VFJoGM0df 1SJLu7QZ39qpjKJQuelTYYFnoTy+FoWwBCJoLCz8sUoM0LT6Zu6giq5M1X8NIwMepbC7 7uSZIhMRx+ogfRmUL2TNAdiE3cOQHCvdk+szpXZmNJ6Z65Mba66d+AvrYBwtcfDtWUJL h0GiBm/XkCTlzYPiVx3zKJ2pXJCG9bf04opxPXkzIK66wB2e3oumEDtFte1dhBJ6Lki6 oNLA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc; 
bh=uCyv6i8mEx63HGeVnpkclCAYwfuPo59tGsDRd2/sWwE=; b=Gm3+259Qct99pO0aQDfxl7u58ADGR/SgmNNlusnStjIzZAbl5D68TREkwD7+yYYZDJ 2LGxz7oLFOtUP7In6Z2bDt97piV+sDp6gGqwICIRrZ6m845B9C09IEL/mAl99XsmZ3Kj 67++5/A/tIqIHWBTn75qsIwZxFOcU18jt01WA+YHgldVLAKVSvcViqITqI/bXm8oIthT WhpikShx4DykHDUuG7+WHTFqehxjLJSK3Ei+SHOBScKM7t02zNCShVP9oYuY0QDcKxXo 3vsUToUJuYIWUiHBK2wX0gqi4FZav5kKuIRMFOY815TPkLQnv6FOAJqbvQIz9mkKphGA iyGA== X-Gm-Message-State: ACgBeo1QAdHRRXjLfHec90+7vlwV/HyqGwzwyIrZb3/f/bGL3pG2IZkB vjDJWfDH6rOoxTsTy0Sb54beOJOs8UnKWA== X-Google-Smtp-Source: AA6agR7AIZCbunQyDG1/dtX86PT7xaywbXtVKJQvAgQyBPt0WhDatNCajH8y4EQstnyFIpIqyX8lwbbFTDiQVg== X-Received: from dmatlack-n2d-128.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1309]) (user=dmatlack job=sendgmr) by 2002:a81:1058:0:b0:336:d111:278 with SMTP id 85-20020a811058000000b00336d1110278mr2093338ywq.140.1661555565760; Fri, 26 Aug 2022 16:12:45 -0700 (PDT) Date: Fri, 26 Aug 2022 16:12:24 -0700 In-Reply-To: <20220826231227.4096391-1-dmatlack@google.com> Mime-Version: 1.0 References: <20220826231227.4096391-1-dmatlack@google.com> X-Mailer: git-send-email 2.37.2.672.g94769d06f0-goog Message-ID: <20220826231227.4096391-8-dmatlack@google.com> Subject: [PATCH v2 07/10] KVM: x86/mmu: Initialize fault.{gfn,slot} earlier for direct MMUs From: David Matlack To: Paolo Bonzini Cc: Sean Christopherson , kvm@vger.kernel.org, David Matlack , Kai Huang , Peter Xu Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Move the initialization of fault.{gfn,slot} earlier in the page fault handling code for fully direct MMUs. This will enable a future commit to split out TDP MMU page fault handling without needing to duplicate the initialization of these 2 fields. Opportunistically take advantage of the fact that fault.gfn is initialized in kvm_tdp_page_fault() rather than recomputing it from fault->addr. No functional change intended. Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmu.c | 5 +---- arch/x86/kvm/mmu/mmu_internal.h | 5 +++++ 2 files changed, 6 insertions(+), 4 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 86282df37217..a185599f4d1d 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4245,9 +4245,6 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault bool is_tdp_mmu_fault = is_tdp_mmu(vcpu->arch.mmu); int r; - fault->gfn = fault->addr >> PAGE_SHIFT; - fault->slot = kvm_vcpu_gfn_to_memslot(vcpu, fault->gfn); - if (page_fault_handle_page_track(vcpu, fault)) return RET_PF_EMULATE; @@ -4351,7 +4348,7 @@ int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) if (shadow_memtype_mask && kvm_arch_has_noncoherent_dma(vcpu->kvm)) { for ( ; fault->max_level > PG_LEVEL_4K; --fault->max_level) { int page_num = KVM_PAGES_PER_HPAGE(fault->max_level); - gfn_t base = (fault->addr >> PAGE_SHIFT) & ~(page_num - 1); + gfn_t base = fault->gfn & ~(page_num - 1); if (kvm_mtrr_check_gfn_range_consistency(vcpu, base, page_num)) break; diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index 1c0a1e7c796d..1e91f24bd865 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -279,6 +279,11 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, }; int r; + if (vcpu->arch.mmu->root_role.direct) { + fault.gfn = fault.addr >> PAGE_SHIFT; + fault.slot = kvm_vcpu_gfn_to_memslot(vcpu, fault.gfn); + } + /* * Async #PF "faults", a.k.a. 
prefetch faults, are not faults from the * guest perspective and have already been counted at the time of the From patchwork Fri Aug 26 23:12:25 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12956700 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5D763ECAAD4 for ; Fri, 26 Aug 2022 23:13:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345472AbiHZXM6 (ORCPT ); Fri, 26 Aug 2022 19:12:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39840 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345462AbiHZXMy (ORCPT ); Fri, 26 Aug 2022 19:12:54 -0400 Received: from mail-yw1-x1149.google.com (mail-yw1-x1149.google.com [IPv6:2607:f8b0:4864:20::1149]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 783E8EA15E for ; Fri, 26 Aug 2022 16:12:49 -0700 (PDT) Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-33dce8cae71so47462897b3.8 for ; Fri, 26 Aug 2022 16:12:48 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc; bh=AVYzpP9SQOKhSSSBUzsWiGl4yuBhwPekHC7JWeKXREI=; b=Yrf8rRq/eon11p5t9Ulb1jGGrlSx+nSm3Du8oXVtXZsBtxduOh2OJGWi8rZlRhfDDb L/N2mtfyulOhx9n77fKS+dzTMa/JvGmPnAKj/0SJCPUydtxopeTUzKYqM/iYFGwcYcBp sHC77T0dBO8U1sMjeggPsA8GNd83QNKl0mS85YXLTTu06CsqtLRTPGJ0WSwMz4i+DgMn wAWAkXcUaQ92moBON8/yPzMAtZ/c/b7Zp4xIO2tBHikbd+APLPOxdK7aB2o0+8kB7DDw 9kn3iKz2H7YmTrejvCXb40IBpY4H+o60IO0hOclechx4j/x8E4/ZfOlwgAFicmWVtT/6 nffA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc; bh=AVYzpP9SQOKhSSSBUzsWiGl4yuBhwPekHC7JWeKXREI=; b=ApmPcvYwk1BX4qlEcxz9V2t50CUkUFWQ42GVnLi1I1OdIeEnI+C1Y1jtSo7JvBtpGL 8ZAkBuxnVVBuJAclfoulqQ1KlG8oP1HIGMcZFEY/okJqvAy0WuZ8WNF4szCUmDLNJi2+ 7t6laodHM/Js9zTnn0jkC9aFFdXe2E/orad7xAWnQVjRRR/Ax/f8d0PKB+MPMequ/ta0 yN+W1n3fdsXZV9GZWGl+v3ioHlZSy/uCoxWcINDMJM0Is45uiUf0v+emYx/rzfQTvsuv IIhwI7RrTgYI4vfdLicB96DMBvYipbimgpQ6oNYs5RYGqIyFTxm4XeaaznGNUFxmvNyz SVQw== X-Gm-Message-State: ACgBeo0XtspjS0d5QRwFf0/tnIxUShIs4qtBAqbf4bO9GYf22lf6/27w 34tyybtsGGFHvOqt1R/qKHZGqm9dRP+rNA== X-Google-Smtp-Source: AA6agR7m34Rtgwk6vDLmffsUvRs5l3AjI7VFVCYydONJ0pF3u6CoQhT/AjGiSrZINtuXbEA7ihEMcyZ4CIHS5w== X-Received: from dmatlack-n2d-128.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1309]) (user=dmatlack job=sendgmr) by 2002:a81:5c43:0:b0:328:b796:654f with SMTP id q64-20020a815c43000000b00328b796654fmr2016573ywb.18.1661555567710; Fri, 26 Aug 2022 16:12:47 -0700 (PDT) Date: Fri, 26 Aug 2022 16:12:25 -0700 In-Reply-To: <20220826231227.4096391-1-dmatlack@google.com> Mime-Version: 1.0 References: <20220826231227.4096391-1-dmatlack@google.com> X-Mailer: git-send-email 2.37.2.672.g94769d06f0-goog Message-ID: <20220826231227.4096391-9-dmatlack@google.com> Subject: [PATCH v2 08/10] KVM: x86/mmu: Split out TDP MMU page fault handling From: David Matlack To: Paolo Bonzini Cc: Sean Christopherson , kvm@vger.kernel.org, David Matlack , Kai Huang , Peter Xu Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Split out the page fault handling for the TDP MMU to a separate 
function. This creates some duplicate code, but makes the TDP MMU fault handler simpler to read by eliminating branches and will enable future cleanups by allowing the TDP MMU and non-TDP MMU fault paths to diverge. Only compile in the TDP MMU fault handler for 64-bit builds since kvm_tdp_mmu_map() does not exist in 32-bit builds. No functional change intended. Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmu.c | 62 ++++++++++++++++++++++++++++++++---------- 1 file changed, 48 insertions(+), 14 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index a185599f4d1d..8f124a23ab4c 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4242,7 +4242,6 @@ static bool is_page_fault_stale(struct kvm_vcpu *vcpu, static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) { - bool is_tdp_mmu_fault = is_tdp_mmu(vcpu->arch.mmu); int r; if (page_fault_handle_page_track(vcpu, fault)) @@ -4261,11 +4260,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault return r; r = RET_PF_RETRY; - - if (is_tdp_mmu_fault) - read_lock(&vcpu->kvm->mmu_lock); - else - write_lock(&vcpu->kvm->mmu_lock); + write_lock(&vcpu->kvm->mmu_lock); if (is_page_fault_stale(vcpu, fault)) goto out_unlock; @@ -4274,16 +4269,10 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault if (r) goto out_unlock; - if (is_tdp_mmu_fault) - r = kvm_tdp_mmu_map(vcpu, fault); - else - r = __direct_map(vcpu, fault); + r = __direct_map(vcpu, fault); out_unlock: - if (is_tdp_mmu_fault) - read_unlock(&vcpu->kvm->mmu_lock); - else - write_unlock(&vcpu->kvm->mmu_lock); + write_unlock(&vcpu->kvm->mmu_lock); kvm_release_pfn_clean(fault->pfn); return r; } @@ -4331,6 +4320,46 @@ int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code, } EXPORT_SYMBOL_GPL(kvm_handle_page_fault); +#ifdef CONFIG_X86_64 +int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu, + struct kvm_page_fault *fault) +{ + int r; + + if (page_fault_handle_page_track(vcpu, fault)) + return RET_PF_EMULATE; + + r = fast_page_fault(vcpu, fault); + if (r != RET_PF_INVALID) + return r; + + r = mmu_topup_memory_caches(vcpu, false); + if (r) + return r; + + r = kvm_faultin_pfn(vcpu, fault, ACC_ALL); + if (r != RET_PF_CONTINUE) + return r; + + r = RET_PF_RETRY; + read_lock(&vcpu->kvm->mmu_lock); + + if (is_page_fault_stale(vcpu, fault)) + goto out_unlock; + + r = make_mmu_pages_available(vcpu); + if (r) + goto out_unlock; + + r = kvm_tdp_mmu_map(vcpu, fault); + +out_unlock: + read_unlock(&vcpu->kvm->mmu_lock); + kvm_release_pfn_clean(fault->pfn); + return r; +} +#endif + int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) { /* @@ -4355,6 +4384,11 @@ int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) } } +#ifdef CONFIG_X86_64 + if (tdp_mmu_enabled) + return kvm_tdp_mmu_page_fault(vcpu, fault); +#endif + return direct_page_fault(vcpu, fault); } From patchwork Fri Aug 26 23:12:26 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12956699 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 67371ECAAD5 for ; Fri, 26 Aug 2022 23:12:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345459AbiHZXM5 (ORCPT ); Fri, 26 Aug 
From patchwork Fri Aug 26 23:12:26 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12956699
Date: Fri, 26 Aug 2022 16:12:26 -0700
In-Reply-To: <20220826231227.4096391-1-dmatlack@google.com>
References: <20220826231227.4096391-1-dmatlack@google.com>
Message-ID: <20220826231227.4096391-10-dmatlack@google.com>
Subject: [PATCH v2 09/10] KVM: x86/mmu: Stop needlessly making MMU pages available for TDP MMU faults
From: David Matlack
To: Paolo Bonzini
Cc: Sean Christopherson, kvm@vger.kernel.org, David Matlack, Kai Huang, Peter Xu
X-Mailing-List: kvm@vger.kernel.org

Stop calling make_mmu_pages_available() when handling TDP MMU faults.
The TDP MMU does not participate in the "available MMU pages" tracking
and limiting, so calling this function is unnecessary work when
handling TDP MMU faults.
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 8f124a23ab4c..803aed2c0e74 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4347,10 +4347,6 @@ int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu,
         if (is_page_fault_stale(vcpu, fault))
                 goto out_unlock;

-        r = make_mmu_pages_available(vcpu);
-        if (r)
-                goto out_unlock;
-
         r = kvm_tdp_mmu_map(vcpu, fault);

 out_unlock:
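With this change applied on top of the previous patch, the read-locked
section of kvm_tdp_mmu_page_fault() reduces to roughly the following.
This is a sketch of the resulting code for illustration, not a hunk
from the patch:

                r = RET_PF_RETRY;
                read_lock(&vcpu->kvm->mmu_lock);

                if (is_page_fault_stale(vcpu, fault))
                        goto out_unlock;

                /*
                 * No make_mmu_pages_available() here: the TDP MMU does not
                 * account its pages against the shadow MMU page limit.
                 */
                r = kvm_tdp_mmu_map(vcpu, fault);

        out_unlock:
                read_unlock(&vcpu->kvm->mmu_lock);
                kvm_release_pfn_clean(fault->pfn);
                return r;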
From patchwork Fri Aug 26 23:12:27 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12956701
Date: Fri, 26 Aug 2022 16:12:27 -0700
In-Reply-To: <20220826231227.4096391-1-dmatlack@google.com>
References: <20220826231227.4096391-1-dmatlack@google.com>
Message-ID: <20220826231227.4096391-11-dmatlack@google.com>
Subject: [PATCH v2 10/10] KVM: x86/mmu: Rename __direct_map() to direct_map()
From: David Matlack
To: Paolo Bonzini
Cc: Sean Christopherson, kvm@vger.kernel.org, David Matlack, Kai Huang, Peter Xu
X-Mailing-List: kvm@vger.kernel.org

Rename __direct_map() to direct_map() since the leading underscores are
unnecessary. This also makes the page fault handler names more
consistent: kvm_tdp_mmu_page_fault() calls kvm_tdp_mmu_map() and
direct_page_fault() calls direct_map().

Opportunistically make some trivial cleanups to comments that had to be
modified anyway since they mentioned __direct_map(). Specifically, use
"()" when referring to functions, and include kvm_tdp_mmu_map() among
the various callers of disallowed_hugepage_adjust().

No functional change intended.

Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c          | 14 +++++++-------
 arch/x86/kvm/mmu/mmu_internal.h |  2 +-
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 803aed2c0e74..2d68adc3bfb1 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3082,11 +3082,11 @@ void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_
             is_shadow_present_pte(spte) && !is_large_pte(spte)) {
                 /*
-                 * A small SPTE exists for this pfn, but FNAME(fetch)
-                 * and __direct_map would like to create a large PTE
-                 * instead: just force them to go down another level,
-                 * patching back for them into pfn the next 9 bits of
-                 * the address.
+                 * A small SPTE exists for this pfn, but FNAME(fetch),
+                 * direct_map(), or kvm_tdp_mmu_map() would like to create a
+                 * large PTE instead: just force them to go down another level,
+                 * patching back for them into pfn the next 9 bits of the
+                 * address.
                  */
                 u64 page_mask = KVM_PAGES_PER_HPAGE(cur_level) -
                                 KVM_PAGES_PER_HPAGE(cur_level - 1);
@@ -3095,7 +3095,7 @@ void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_
         }
 }

-static int __direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
+static int direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
         struct kvm_shadow_walk_iterator it;
         struct kvm_mmu_page *sp;
@@ -4269,7 +4269,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
         if (r)
                 goto out_unlock;

-        r = __direct_map(vcpu, fault);
+        r = direct_map(vcpu, fault);

 out_unlock:
         write_unlock(&vcpu->kvm->mmu_lock);
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 1e91f24bd865..b8c116ec1a89 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -198,7 +198,7 @@ struct kvm_page_fault {
         /*
          * Maximum page size that can be created for this fault; input to
-         * FNAME(fetch), __direct_map and kvm_tdp_mmu_map.
+         * FNAME(fetch), direct_map() and kvm_tdp_mmu_map().
          */
         u8 max_level;