From patchwork Mon Aug 15 23:01:02 2022
Subject: [PATCH 1/9] KVM: x86/mmu: Always enable the TDP MMU when TDP is enabled
From: David Matlack
To: Paolo Bonzini
Cc: Sean Christopherson, Borislav Petkov, "Paul E. McKenney", Kees Cook,
    Peter Zijlstra, Andrew Morton, Randy Dunlap, Damien Le Moal,
    kvm@vger.kernel.org, David Matlack
Date: Mon, 15 Aug 2022 16:01:02 -0700
Message-Id: <20220815230110.2266741-2-dmatlack@google.com>
In-Reply-To: <20220815230110.2266741-1-dmatlack@google.com>
References: <20220815230110.2266741-1-dmatlack@google.com>

Delete the module parameter tdp_mmu and force KVM to always use the TDP
MMU when TDP hardware support is enabled.

The TDP MMU was introduced in 5.10 and has been enabled by default since
5.15. At this point there are no known functionality gaps between the
TDP MMU and the shadow MMU, and the TDP MMU uses less memory and scales
better with the number of vCPUs. In other words, there is no good reason
to disable the TDP MMU.

Dropping the ability to disable the TDP MMU reduces the number of
possible configurations that need to be tested to validate KVM (i.e. no
need to test with tdp_mmu=N), and simplifies the code.

Signed-off-by: David Matlack
---
 Documentation/admin-guide/kernel-parameters.txt | 3 ++-
 arch/x86/kvm/mmu/tdp_mmu.c                      | 5 +----
 2 files changed, 3 insertions(+), 5 deletions(-)

base-commit: 93472b79715378a2386598d6632c654a2223267b
prerequisite-patch-id: 8c230105c8a2f1245dedb5b386327d98865d0bb2
prerequisite-patch-id: 9b4329037e2e880db19f3221e47d956b78acadc8
prerequisite-patch-id: 2e3661ba8856c29b769499bac525b6943d9284b8

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index f7561cd494cb..e75d45a42b01 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2418,7 +2418,8 @@
 			the KVM_CLEAR_DIRTY ioctl, and only for the pages being
 			cleared.
 
-			Eager page splitting is only supported when kvm.tdp_mmu=Y.
+			Eager page splitting is only supported when TDP hardware
+			support is enabled.
 
 			Default is Y (on).
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index bf2ccf9debca..d6c30a648d8d 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -10,15 +10,12 @@
 #include
 #include
 
-static bool __read_mostly tdp_mmu_enabled = true;
-module_param_named(tdp_mmu, tdp_mmu_enabled, bool, 0644);
-
 /* Initializes the TDP MMU for the VM, if enabled. */
 int kvm_mmu_init_tdp_mmu(struct kvm *kvm)
 {
 	struct workqueue_struct *wq;
 
-	if (!tdp_enabled || !READ_ONCE(tdp_mmu_enabled))
+	if (!tdp_enabled)
 		return 0;
 
 	wq = alloc_workqueue("kvm", WQ_UNBOUND|WQ_MEM_RECLAIM|WQ_CPU_INTENSIVE, 0);
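A minimal standalone sketch (not part of the series) of what the patch above changes: with the kvm.tdp_mmu module parameter gone, whether the TDP MMU handles a VM depends only on TDP hardware support. Only the names tdp_enabled and tdp_mmu mirror real kernel symbols; everything else here is illustrative.

#include <stdbool.h>
#include <stdio.h>

/* Before the patch: two knobs decided which MMU handled a VM's faults. */
static bool used_tdp_mmu_before(bool tdp_enabled, bool tdp_mmu_param)
{
	return tdp_enabled && tdp_mmu_param;
}

/* After the patch: TDP hardware support is the only input. */
static bool uses_tdp_mmu_after(bool tdp_enabled)
{
	return tdp_enabled;
}

int main(void)
{
	/* The tdp_mmu=N configuration simply ceases to exist. */
	printf("tdp_enabled=1 tdp_mmu=N: before=%d after=%d\n",
	       used_tdp_mmu_before(true, false), uses_tdp_mmu_after(true));
	return 0;
}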
McKenney" , Kees Cook , Peter Zijlstra , Andrew Morton , Randy Dunlap , Damien Le Moal , kvm@vger.kernel.org, David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Drop kvm->arch.tdp_mmu_enabled and replace it with checks for tdp_enabled. The TDP MMU is unconditionally enabled when TDP hardware support is enabled, so these checks are identical. No functional change intended. Signed-off-by: David Matlack Reported-by: kernel test robot --- arch/x86/include/asm/kvm_host.h | 9 --------- arch/x86/kvm/mmu.h | 8 +------- arch/x86/kvm/mmu/mmu.c | 34 ++++++++++++++++----------------- arch/x86/kvm/mmu/tdp_mmu.c | 4 +--- 4 files changed, 19 insertions(+), 36 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index e8281d64a431..5243db32f954 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1262,15 +1262,6 @@ struct kvm_arch { struct task_struct *nx_lpage_recovery_thread; #ifdef CONFIG_X86_64 - /* - * Whether the TDP MMU is enabled for this VM. This contains a - * snapshot of the TDP MMU module parameter from when the VM was - * created and remains unchanged for the life of the VM. If this is - * true, TDP MMU handler functions will run for various MMU - * operations. - */ - bool tdp_mmu_enabled; - /* * List of struct kvm_mmu_pages being used as roots. * All struct kvm_mmu_pages in the list should have diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h index a99acec925eb..ee3102a424aa 100644 --- a/arch/x86/kvm/mmu.h +++ b/arch/x86/kvm/mmu.h @@ -227,15 +227,9 @@ static inline bool kvm_shadow_root_allocated(struct kvm *kvm) return smp_load_acquire(&kvm->arch.shadow_root_allocated); } -#ifdef CONFIG_X86_64 -static inline bool is_tdp_mmu_enabled(struct kvm *kvm) { return kvm->arch.tdp_mmu_enabled; } -#else -static inline bool is_tdp_mmu_enabled(struct kvm *kvm) { return false; } -#endif - static inline bool kvm_memslots_have_rmaps(struct kvm *kvm) { - return !is_tdp_mmu_enabled(kvm) || kvm_shadow_root_allocated(kvm); + return !tdp_enabled || kvm_shadow_root_allocated(kvm); } static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 3e1317325e1f..8c293a88d923 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -1253,7 +1253,7 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm, { struct kvm_rmap_head *rmap_head; - if (is_tdp_mmu_enabled(kvm)) + if (tdp_enabled) kvm_tdp_mmu_clear_dirty_pt_masked(kvm, slot, slot->base_gfn + gfn_offset, mask, true); @@ -1286,7 +1286,7 @@ static void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm, { struct kvm_rmap_head *rmap_head; - if (is_tdp_mmu_enabled(kvm)) + if (tdp_enabled) kvm_tdp_mmu_clear_dirty_pt_masked(kvm, slot, slot->base_gfn + gfn_offset, mask, false); @@ -1369,7 +1369,7 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm, } } - if (is_tdp_mmu_enabled(kvm)) + if (tdp_enabled) write_protected |= kvm_tdp_mmu_write_protect_gfn(kvm, slot, gfn, min_level); @@ -1532,7 +1532,7 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range) if (kvm_memslots_have_rmaps(kvm)) flush = kvm_handle_gfn_range(kvm, range, kvm_zap_rmap); - if (is_tdp_mmu_enabled(kvm)) + if (tdp_enabled) flush = kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush); return flush; @@ -1545,7 +1545,7 @@ bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range) if (kvm_memslots_have_rmaps(kvm)) flush = kvm_handle_gfn_range(kvm, range, kvm_set_pte_rmap); - if 
(is_tdp_mmu_enabled(kvm)) + if (tdp_enabled) flush |= kvm_tdp_mmu_set_spte_gfn(kvm, range); return flush; @@ -1618,7 +1618,7 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range) if (kvm_memslots_have_rmaps(kvm)) young = kvm_handle_gfn_range(kvm, range, kvm_age_rmap); - if (is_tdp_mmu_enabled(kvm)) + if (tdp_enabled) young |= kvm_tdp_mmu_age_gfn_range(kvm, range); return young; @@ -1631,7 +1631,7 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range) if (kvm_memslots_have_rmaps(kvm)) young = kvm_handle_gfn_range(kvm, range, kvm_test_age_rmap); - if (is_tdp_mmu_enabled(kvm)) + if (tdp_enabled) young |= kvm_tdp_mmu_test_age_gfn(kvm, range); return young; @@ -3543,7 +3543,7 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu) if (r < 0) goto out_unlock; - if (is_tdp_mmu_enabled(vcpu->kvm)) { + if (tdp_enabled) { root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu); mmu->root.hpa = root; } else if (shadow_root_level >= PT64_ROOT_4LEVEL) { @@ -5922,7 +5922,7 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm) * write and in the same critical section as making the reload request, * e.g. before kvm_zap_obsolete_pages() could drop mmu_lock and yield. */ - if (is_tdp_mmu_enabled(kvm)) + if (tdp_enabled) kvm_tdp_mmu_invalidate_all_roots(kvm); /* @@ -5947,7 +5947,7 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm) * Deferring the zap until the final reference to the root is put would * lead to use-after-free. */ - if (is_tdp_mmu_enabled(kvm)) + if (tdp_enabled) kvm_tdp_mmu_zap_invalidated_roots(kvm); } @@ -6059,7 +6059,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end) flush = kvm_rmap_zap_gfn_range(kvm, gfn_start, gfn_end); - if (is_tdp_mmu_enabled(kvm)) { + if (tdp_enabled) { for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) flush = kvm_tdp_mmu_zap_leafs(kvm, i, gfn_start, gfn_end, true, flush); @@ -6095,7 +6095,7 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm, write_unlock(&kvm->mmu_lock); } - if (is_tdp_mmu_enabled(kvm)) { + if (tdp_enabled) { read_lock(&kvm->mmu_lock); flush |= kvm_tdp_mmu_wrprot_slot(kvm, memslot, start_level); read_unlock(&kvm->mmu_lock); @@ -6364,7 +6364,7 @@ void kvm_mmu_try_split_huge_pages(struct kvm *kvm, u64 start, u64 end, int target_level) { - if (!is_tdp_mmu_enabled(kvm)) + if (!tdp_enabled) return; if (kvm_memslots_have_rmaps(kvm)) @@ -6385,7 +6385,7 @@ void kvm_mmu_slot_try_split_huge_pages(struct kvm *kvm, u64 start = memslot->base_gfn; u64 end = start + memslot->npages; - if (!is_tdp_mmu_enabled(kvm)) + if (!tdp_enabled) return; if (kvm_memslots_have_rmaps(kvm)) { @@ -6470,7 +6470,7 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm, write_unlock(&kvm->mmu_lock); } - if (is_tdp_mmu_enabled(kvm)) { + if (tdp_enabled) { read_lock(&kvm->mmu_lock); kvm_tdp_mmu_zap_collapsible_sptes(kvm, slot); read_unlock(&kvm->mmu_lock); @@ -6507,7 +6507,7 @@ void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm, write_unlock(&kvm->mmu_lock); } - if (is_tdp_mmu_enabled(kvm)) { + if (tdp_enabled) { read_lock(&kvm->mmu_lock); flush |= kvm_tdp_mmu_clear_dirty_slot(kvm, memslot); read_unlock(&kvm->mmu_lock); @@ -6542,7 +6542,7 @@ void kvm_mmu_zap_all(struct kvm *kvm) kvm_mmu_commit_zap_page(kvm, &invalid_list); - if (is_tdp_mmu_enabled(kvm)) + if (tdp_enabled) kvm_tdp_mmu_zap_all(kvm); write_unlock(&kvm->mmu_lock); diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index d6c30a648d8d..383162989645 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -22,8 +22,6 @@ int 
kvm_mmu_init_tdp_mmu(struct kvm *kvm) if (!wq) return -ENOMEM; - /* This should not be changed for the lifetime of the VM. */ - kvm->arch.tdp_mmu_enabled = true; INIT_LIST_HEAD(&kvm->arch.tdp_mmu_roots); spin_lock_init(&kvm->arch.tdp_mmu_pages_lock); INIT_LIST_HEAD(&kvm->arch.tdp_mmu_pages); @@ -45,7 +43,7 @@ static __always_inline bool kvm_lockdep_assert_mmu_lock_held(struct kvm *kvm, void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm) { - if (!kvm->arch.tdp_mmu_enabled) + if (!tdp_enabled) return; /* Also waits for any queued work items. */ From patchwork Mon Aug 15 23:01:04 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12944239 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2C7DDC00140 for ; Tue, 16 Aug 2022 02:39:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233117AbiHPCjJ (ORCPT ); Mon, 15 Aug 2022 22:39:09 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55558 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233719AbiHPCis (ORCPT ); Mon, 15 Aug 2022 22:38:48 -0400 Received: from mail-yw1-x1149.google.com (mail-yw1-x1149.google.com [IPv6:2607:f8b0:4864:20::1149]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F0C715D122 for ; Mon, 15 Aug 2022 16:01:20 -0700 (PDT) Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-324989683fdso79495647b3.12 for ; Mon, 15 Aug 2022 16:01:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:references:mime-version:message-id:in-reply-to :date:from:to:cc; bh=312y1r9DCcSJg69qR46Ujpn+LlXoc7aI9JqL7JICd70=; b=mpSeVasYOQbfUNwE6J9yQ20WftzXqvE8cZzjRHK4RQGz7RavF1k6Bbk7iDFM7LEJqA W7/OCCTc54GpB+WxUj25VPx3PmL8DBx85TE2WZKMebRWT6pnxRqKBQg5rQqxXTooUMiA eqv2XUNnIeLqvejKGe+UYEUANQdez1ymh2fKXIgmhmkuDeEGROe/6DXWdcuyplfSnYYC cNg3DUsD1QydsFRMws3MFS6MJawMzW0Lxzyg3RDK1qv2kO6rWy6wCPYnG0qkZ6LjODL6 5guQXo8A97dijr55Ja+2zXAVF6EPL8XpgvZMzfYH/2a2tieJ/d/ATxhiUwlv1ViHTwBv 24xA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:references:mime-version:message-id:in-reply-to :date:x-gm-message-state:from:to:cc; bh=312y1r9DCcSJg69qR46Ujpn+LlXoc7aI9JqL7JICd70=; b=TclzsvA6I28j6+3sbNIPY0UEa9FiXLFCrUeldg6GyJ0aFvnSLmE9cIXEgdqwavN1KR 28JdMp6cnR7Bcw6Q1QKnKA1v/LR9b71wqKQq6dLs/n0Uyk1vRxLsQksq2CecPrNqKs/4 n8hbiXs1LjxCoWA/v7UvIdrNcwMiYG5/lkrloZwOWYpGZZvucLYLZDr6cTSz0v/8LJ4j turfN80ZUJ3oXMlQlZnaQzwl0OWeRjgp7HnrBqlYVJfwjCIPCeQv2T27xAzN58ydqhv7 mGVsFzRk9uUXB27H8NfD0NW6QO19e/kIUV7m72omQ75V4kSSxC+VO1Oatq8wP0IbprMW QiMQ== X-Gm-Message-State: ACgBeo2RWrB+Oqd7r57FEJd1/kkEnMs4xWU++25lxyOFMMEOg5S5IVJ7 /DooKD95o3jk5YjWnMvUJxb8bAsR2miUNQ== X-Google-Smtp-Source: AA6agR4/090qsVeE3DKFoiC5NO46mHWopxigcGkI/MIMAGp0lnHoX4ypkTjt8irb3Mz1nmn8qlCzC0/tuqET3Q== X-Received: from dmatlack-n2d-128.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1309]) (user=dmatlack job=sendgmr) by 2002:a81:8145:0:b0:323:26f5:2c8a with SMTP id r66-20020a818145000000b0032326f52c8amr14288041ywf.261.1660604480062; Mon, 15 Aug 2022 16:01:20 -0700 (PDT) Date: Mon, 15 Aug 2022 16:01:04 -0700 In-Reply-To: <20220815230110.2266741-1-dmatlack@google.com> Message-Id: <20220815230110.2266741-4-dmatlack@google.com> Mime-Version: 1.0 
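A hedged illustration of the refactor above: once the module parameter is gone, the per-VM snapshot can never differ from the global, so readers can use the global directly. The struct and functions are simplified stand-ins, not KVM's definitions.

#include <stdbool.h>

static bool tdp_enabled;                /* global; fixed when the module loads */

struct vm {
	bool tdp_mmu_enabled;           /* per-VM snapshot -- dropped by the patch */
};

/* Before: every VM carried a copy that could, in principle, differ. */
static bool is_tdp_mmu_enabled_before(const struct vm *vm)
{
	return vm->tdp_mmu_enabled;
}

/* After: the snapshot is always equal to the global, so read the global. */
static bool is_tdp_mmu_enabled_after(void)
{
	return tdp_enabled;
}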
From patchwork Mon Aug 15 23:01:04 2022
Subject: [PATCH 3/9] KVM: x86/mmu: Consolidate mmu_seq calculations in kvm_faultin_pfn()
From: David Matlack
To: Paolo Bonzini
Cc: Sean Christopherson, Borislav Petkov, "Paul E. McKenney", Kees Cook,
    Peter Zijlstra, Andrew Morton, Randy Dunlap, Damien Le Moal,
    kvm@vger.kernel.org, David Matlack
Date: Mon, 15 Aug 2022 16:01:04 -0700
Message-Id: <20220815230110.2266741-4-dmatlack@google.com>
In-Reply-To: <20220815230110.2266741-1-dmatlack@google.com>
References: <20220815230110.2266741-1-dmatlack@google.com>

Calculate mmu_seq during kvm_faultin_pfn() and stash it in struct
kvm_page_fault. This eliminates duplicate code and reduces the number
of parameters needed for is_page_fault_stale().

Note, the smp_rmb() needs a comment, but that is out of scope for this
commit, which is pure code motion.

No functional change intended.

Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c          | 14 ++++++--------
 arch/x86/kvm/mmu/mmu_internal.h |  1 +
 arch/x86/kvm/mmu/paging_tmpl.h  |  6 +-----
 3 files changed, 8 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 8c293a88d923..af1b7e7fb4fb 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4127,6 +4127,9 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	struct kvm_memory_slot *slot = fault->slot;
 	bool async;
 
+	fault->mmu_seq = vcpu->kvm->mmu_notifier_seq;
+	smp_rmb();
+
 	/*
 	 * Retry the page fault if the gfn hit a memslot that is being deleted
 	 * or moved. This ensures any existing SPTEs for the old memslot will
@@ -4183,7 +4186,7 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
  * root was invalidated by a memslot update or a relevant mmu_notifier fired.
  */
 static bool is_page_fault_stale(struct kvm_vcpu *vcpu,
-				struct kvm_page_fault *fault, int mmu_seq)
+				struct kvm_page_fault *fault)
 {
 	struct kvm_mmu_page *sp = to_shadow_page(vcpu->arch.mmu->root.hpa);
 
@@ -4203,14 +4206,12 @@ static bool is_page_fault_stale(struct kvm_vcpu *vcpu,
 		return true;
 
 	return fault->slot &&
-	       mmu_notifier_retry_hva(vcpu->kvm, mmu_seq, fault->hva);
+	       mmu_notifier_retry_hva(vcpu->kvm, fault->mmu_seq, fault->hva);
 }
 
 static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
 	bool is_tdp_mmu_fault = is_tdp_mmu(vcpu->arch.mmu);
-
-	unsigned long mmu_seq;
 	int r;
 
 	fault->gfn = fault->addr >> PAGE_SHIFT;
@@ -4227,9 +4228,6 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	if (r)
 		return r;
 
-	mmu_seq = vcpu->kvm->mmu_notifier_seq;
-	smp_rmb();
-
 	r = kvm_faultin_pfn(vcpu, fault);
 	if (r != RET_PF_CONTINUE)
 		return r;
@@ -4245,7 +4243,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	else
 		write_lock(&vcpu->kvm->mmu_lock);
 
-	if (is_page_fault_stale(vcpu, fault, mmu_seq))
+	if (is_page_fault_stale(vcpu, fault))
 		goto out_unlock;
 
 	r = make_mmu_pages_available(vcpu);
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 582def531d4d..1c0a1e7c796d 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -221,6 +221,7 @@ struct kvm_page_fault {
 	struct kvm_memory_slot *slot;
 
 	/* Outputs of kvm_faultin_pfn. */
+	unsigned long mmu_seq;
 	kvm_pfn_t pfn;
 	hva_t hva;
 	bool map_writable;
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index f5958071220c..a199db4acecc 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -791,7 +791,6 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 {
 	struct guest_walker walker;
 	int r;
-	unsigned long mmu_seq;
 	bool is_self_change_mapping;
 
 	pgprintk("%s: addr %lx err %x\n", __func__, fault->addr, fault->error_code);
@@ -838,9 +837,6 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	else
 		fault->max_level = walker.level;
 
-	mmu_seq = vcpu->kvm->mmu_notifier_seq;
-	smp_rmb();
-
 	r = kvm_faultin_pfn(vcpu, fault);
 	if (r != RET_PF_CONTINUE)
 		return r;
@@ -871,7 +867,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	r = RET_PF_RETRY;
 	write_lock(&vcpu->kvm->mmu_lock);
 
-	if (is_page_fault_stale(vcpu, fault, mmu_seq))
+	if (is_page_fault_stale(vcpu, fault))
 		goto out_unlock;
 
 	r = make_mmu_pages_available(vcpu);
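A compact, hedged model of the mmu_seq pattern the patch consolidates: snapshot an invalidation counter before the (potentially slow) pfn lookup, stash it in the fault context, and recheck it under the MMU lock; a mismatch means an mmu_notifier invalidation may have raced, so the fault is retried. All names below are illustrative stand-ins rather than KVM's real ones.

#include <stdbool.h>

struct fault_ctx {
	unsigned long mmu_seq;          /* snapshot taken during "faultin" */
	unsigned long pfn;
};

static unsigned long notifier_seq;      /* bumped by invalidations */

static void faultin_pfn(struct fault_ctx *fault)
{
	/* Snapshot before resolving the pfn; in the kernel a read barrier
	 * (smp_rmb()) orders this snapshot against the lookup that follows. */
	fault->mmu_seq = notifier_seq;
	fault->pfn = 0x1234;            /* pretend gfn->pfn translation */
}

/* Called under the MMU lock: stale if an invalidation ran since the snapshot. */
static bool is_fault_stale(const struct fault_ctx *fault)
{
	return fault->mmu_seq != notifier_seq;
}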
From patchwork Mon Aug 15 23:01:05 2022
Subject: [PATCH 4/9] KVM: x86/mmu: Rename __direct_map() to nonpaging_map()
From: David Matlack
To: Paolo Bonzini
Cc: Sean Christopherson, Borislav Petkov, "Paul E. McKenney", Kees Cook,
    Peter Zijlstra, Andrew Morton, Randy Dunlap, Damien Le Moal,
    kvm@vger.kernel.org, David Matlack
Date: Mon, 15 Aug 2022 16:01:05 -0700
Message-Id: <20220815230110.2266741-5-dmatlack@google.com>
In-Reply-To: <20220815230110.2266741-1-dmatlack@google.com>
References: <20220815230110.2266741-1-dmatlack@google.com>

Rename __direct_map() to nonpaging_map() since it is only used to
handle faults for non-paging guests on TDP-disabled hosts.

Opportunistically make some trivial cleanups to comments that had to be
modified anyway since they mentioned __direct_map(). Specifically, use
"()" when referring to functions, and include kvm_tdp_mmu_map() among
the various callers of disallowed_hugepage_adjust().

No functional change intended.

Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c          | 14 +++++++-------
 arch/x86/kvm/mmu/mmu_internal.h |  2 +-
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index af1b7e7fb4fb..3e03407f1321 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3072,11 +3072,11 @@ void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_
 	    is_shadow_present_pte(spte) &&
 	    !is_large_pte(spte)) {
 		/*
-		 * A small SPTE exists for this pfn, but FNAME(fetch)
-		 * and __direct_map would like to create a large PTE
-		 * instead: just force them to go down another level,
-		 * patching back for them into pfn the next 9 bits of
-		 * the address.
+		 * A small SPTE exists for this pfn, but FNAME(fetch),
+		 * nonpaging_map(), and kvm_tdp_mmu_map() would like to create a
+		 * large PTE instead: just force them to go down another level,
+		 * patching back for them into pfn the next 9 bits of the
+		 * address.
 		 */
 		u64 page_mask = KVM_PAGES_PER_HPAGE(cur_level) -
 				KVM_PAGES_PER_HPAGE(cur_level - 1);
@@ -3085,7 +3085,7 @@ void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_
 	}
 }
 
-static int __direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
+static int nonpaging_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
 	struct kvm_shadow_walk_iterator it;
 	struct kvm_mmu_page *sp;
@@ -4253,7 +4253,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	if (is_tdp_mmu_fault)
 		r = kvm_tdp_mmu_map(vcpu, fault);
 	else
-		r = __direct_map(vcpu, fault);
+		r = nonpaging_map(vcpu, fault);
 
 out_unlock:
 	if (is_tdp_mmu_fault)
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 1c0a1e7c796d..f65892c2fdeb 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -198,7 +198,7 @@ struct kvm_page_fault {
 
 	/*
 	 * Maximum page size that can be created for this fault; input to
-	 * FNAME(fetch), __direct_map and kvm_tdp_mmu_map.
+	 * FNAME(fetch), nonpaging_map() and kvm_tdp_mmu_map().
 	 */
 	u8 max_level;
McKenney" , Kees Cook , Peter Zijlstra , Andrew Morton , Randy Dunlap , Damien Le Moal , kvm@vger.kernel.org, David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Separate the page fault handling for TDP faults and non-paging faults. This creates some duplicate code in the short term, but makes each routine simpler to read by eliminating branches and enables future cleanups by allowing the two paths to diverge. Signed-off-by: David Matlack Reported-by: kernel test robot --- arch/x86/kvm/mmu/mmu.c | 77 +++++++++++++++++++++++++++--------------- 1 file changed, 50 insertions(+), 27 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 3e03407f1321..182f9f417e4e 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4209,11 +4209,15 @@ static bool is_page_fault_stale(struct kvm_vcpu *vcpu, mmu_notifier_retry_hva(vcpu->kvm, fault->mmu_seq, fault->hva); } -static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) +static int nonpaging_page_fault(struct kvm_vcpu *vcpu, + struct kvm_page_fault *fault) { - bool is_tdp_mmu_fault = is_tdp_mmu(vcpu->arch.mmu); int r; + pgprintk("%s: gva %lx error %x\n", __func__, fault->addr, fault->error_code); + + /* This path builds a PAE pagetable, we can map 2mb pages at maximum. */ + fault->max_level = PG_LEVEL_2M; fault->gfn = fault->addr >> PAGE_SHIFT; fault->slot = kvm_vcpu_gfn_to_memslot(vcpu, fault->gfn); @@ -4237,11 +4241,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault return r; r = RET_PF_RETRY; - - if (is_tdp_mmu_fault) - read_lock(&vcpu->kvm->mmu_lock); - else - write_lock(&vcpu->kvm->mmu_lock); + write_lock(&vcpu->kvm->mmu_lock); if (is_page_fault_stale(vcpu, fault)) goto out_unlock; @@ -4250,30 +4250,14 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault if (r) goto out_unlock; - if (is_tdp_mmu_fault) - r = kvm_tdp_mmu_map(vcpu, fault); - else - r = nonpaging_map(vcpu, fault); + r = nonpaging_map(vcpu, fault); out_unlock: - if (is_tdp_mmu_fault) - read_unlock(&vcpu->kvm->mmu_lock); - else - write_unlock(&vcpu->kvm->mmu_lock); + write_unlock(&vcpu->kvm->mmu_lock); kvm_release_pfn_clean(fault->pfn); return r; } -static int nonpaging_page_fault(struct kvm_vcpu *vcpu, - struct kvm_page_fault *fault) -{ - pgprintk("%s: gva %lx error %x\n", __func__, fault->addr, fault->error_code); - - /* This path builds a PAE pagetable, we can map 2mb pages at maximum. 
*/ - fault->max_level = PG_LEVEL_2M; - return direct_page_fault(vcpu, fault); -} - int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code, u64 fault_address, char *insn, int insn_len) { @@ -4309,6 +4293,11 @@ EXPORT_SYMBOL_GPL(kvm_handle_page_fault); int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) { + int r; + + fault->gfn = fault->addr >> PAGE_SHIFT; + fault->slot = kvm_vcpu_gfn_to_memslot(vcpu, fault->gfn); + /* * If the guest's MTRRs may be used to compute the "real" memtype, * restrict the mapping level to ensure KVM uses a consistent memtype @@ -4324,14 +4313,48 @@ int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) if (shadow_memtype_mask && kvm_arch_has_noncoherent_dma(vcpu->kvm)) { for ( ; fault->max_level > PG_LEVEL_4K; --fault->max_level) { int page_num = KVM_PAGES_PER_HPAGE(fault->max_level); - gfn_t base = (fault->addr >> PAGE_SHIFT) & ~(page_num - 1); + gfn_t base = fault->gfn & ~(page_num - 1); if (kvm_mtrr_check_gfn_range_consistency(vcpu, base, page_num)) break; } } - return direct_page_fault(vcpu, fault); + if (page_fault_handle_page_track(vcpu, fault)) + return RET_PF_EMULATE; + + r = fast_page_fault(vcpu, fault); + if (r != RET_PF_INVALID) + return r; + + r = mmu_topup_memory_caches(vcpu, false); + if (r) + return r; + + r = kvm_faultin_pfn(vcpu, fault); + if (r != RET_PF_CONTINUE) + return r; + + r = handle_abnormal_pfn(vcpu, fault, ACC_ALL); + if (r != RET_PF_CONTINUE) + return r; + + r = RET_PF_RETRY; + read_lock(&vcpu->kvm->mmu_lock); + + if (is_page_fault_stale(vcpu, fault)) + goto out_unlock; + + r = make_mmu_pages_available(vcpu); + if (r) + goto out_unlock; + + r = kvm_tdp_mmu_map(vcpu, fault); + +out_unlock: + read_unlock(&vcpu->kvm->mmu_lock); + kvm_release_pfn_clean(fault->pfn); + return r; } static void nonpaging_init_context(struct kvm_mmu *context) From patchwork Mon Aug 15 23:01:07 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12944310 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CB515C19F2C for ; Tue, 16 Aug 2022 05:54:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231166AbiHPFyG (ORCPT ); Tue, 16 Aug 2022 01:54:06 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51900 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229791AbiHPFxw (ORCPT ); Tue, 16 Aug 2022 01:53:52 -0400 Received: from mail-yw1-x1149.google.com (mail-yw1-x1149.google.com [IPv6:2607:f8b0:4864:20::1149]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4FC0F8306E for ; Mon, 15 Aug 2022 16:01:27 -0700 (PDT) Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-32851d0f8beso79665527b3.22 for ; Mon, 15 Aug 2022 16:01:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:references:mime-version:message-id:in-reply-to :date:from:to:cc; bh=x0zAkD4YhDSAmJJLXBaHHVhHveKV6FeZwfvyZmVn5X0=; b=YKGv7atx+z0swJ45UbcQNtvE5xMGQxflFCn4o6NmqdzeLLedpsYgGOnVIwkryg5QXj Cf0i8FDG2rYCn7PWKToyf2Qq7oVLbLhzo0cEZUgxzqMyKGaFr3Krp1JOI4jqABCff1vY DSbUFGLtp4hqnJOu2KJJt+tEFsPDjeMSwlBILgO/9iy4nBdIJxtLHA+ials3vP8IixF1 4Lg68EgJHfW31fvAlixFWgk7hPQijjAog9/+JpXPfpCXhaqlQuMWxiqipmu1ipsVEexP 
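A hedged sketch of the control-flow change above: the shared handler had to branch on the MMU type to pick the lock mode and map function, while the split handlers are straight-line. The TDP MMU path takes the MMU lock for read (shared), the shadow path for write (exclusive); the lock and helpers below are simplified stand-ins.

#include <pthread.h>

static pthread_rwlock_t mmu_lock = PTHREAD_RWLOCK_INITIALIZER;

static void map_tdp(void)    { /* install SPTEs under the shared lock */ }
static void map_shadow(void) { /* install SPTEs under the exclusive lock */ }

/* Before: one handler, branching on the MMU type at every step. */
static void direct_fault(int is_tdp_mmu)
{
	if (is_tdp_mmu)
		pthread_rwlock_rdlock(&mmu_lock);
	else
		pthread_rwlock_wrlock(&mmu_lock);

	if (is_tdp_mmu)
		map_tdp();
	else
		map_shadow();

	pthread_rwlock_unlock(&mmu_lock);
}

/* After: each path is straight-line and can evolve independently. */
static void tdp_fault(void)
{
	pthread_rwlock_rdlock(&mmu_lock);
	map_tdp();
	pthread_rwlock_unlock(&mmu_lock);
}

static void nonpaging_fault(void)
{
	pthread_rwlock_wrlock(&mmu_lock);
	map_shadow();
	pthread_rwlock_unlock(&mmu_lock);
}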
From patchwork Mon Aug 15 23:01:07 2022
Subject: [PATCH 6/9] KVM: x86/mmu: Stop needlessly making MMU pages available for TDP MMU faults
From: David Matlack
To: Paolo Bonzini
Cc: Sean Christopherson, Borislav Petkov, "Paul E. McKenney", Kees Cook,
    Peter Zijlstra, Andrew Morton, Randy Dunlap, Damien Le Moal,
    kvm@vger.kernel.org, David Matlack
Date: Mon, 15 Aug 2022 16:01:07 -0700
Message-Id: <20220815230110.2266741-7-dmatlack@google.com>
In-Reply-To: <20220815230110.2266741-1-dmatlack@google.com>
References: <20220815230110.2266741-1-dmatlack@google.com>

Stop calling make_mmu_pages_available() when handling TDP MMU faults.
The TDP MMU does not participate in the "available MMU pages" tracking
and limiting, so calling this function is unnecessary work when
handling TDP MMU faults.

Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 182f9f417e4e..6613ae387e1b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4345,10 +4345,6 @@ int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	if (is_page_fault_stale(vcpu, fault))
 		goto out_unlock;
 
-	r = make_mmu_pages_available(vcpu);
-	if (r)
-		goto out_unlock;
-
 	r = kvm_tdp_mmu_map(vcpu, fault);
 
 out_unlock:
From patchwork Mon Aug 15 23:01:08 2022
Subject: [PATCH 7/9] KVM: x86/mmu: Handle "error PFNs" in kvm_faultin_pfn()
From: David Matlack
To: Paolo Bonzini
Cc: Sean Christopherson, Borislav Petkov, "Paul E. McKenney", Kees Cook,
    Peter Zijlstra, Andrew Morton, Randy Dunlap, Damien Le Moal,
    kvm@vger.kernel.org, David Matlack
Date: Mon, 15 Aug 2022 16:01:08 -0700
Message-Id: <20220815230110.2266741-8-dmatlack@google.com>
In-Reply-To: <20220815230110.2266741-1-dmatlack@google.com>
References: <20220815230110.2266741-1-dmatlack@google.com>

Handle "error PFNs" directly in kvm_faultin_pfn() rather than relying
on the caller to invoke handle_abnormal_pfn() after kvm_faultin_pfn().
Opportunistically rename kvm_handle_bad_page() to kvm_handle_error_pfn()
to make it more consistent with e.g. is_error_pfn().

The reason for this change is to reduce the number of things being
handled in handle_abnormal_pfn(), which is currently a grab bag for
edge conditions (the other being updating the vCPU MMIO cache).

No functional change intended.

Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 6613ae387e1b..36960ea0d4ef 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3134,7 +3134,7 @@ static void kvm_send_hwpoison_signal(unsigned long address, struct task_struct *
 	send_sig_mceerr(BUS_MCEERR_AR, (void __user *)address, PAGE_SHIFT, tsk);
 }
 
-static int kvm_handle_bad_page(struct kvm_vcpu *vcpu, gfn_t gfn, kvm_pfn_t pfn)
+static int kvm_handle_error_pfn(struct kvm_vcpu *vcpu, gfn_t gfn, kvm_pfn_t pfn)
 {
 	/*
 	 * Do not cache the mmio info caused by writing the readonly gfn
@@ -3155,10 +3155,6 @@ static int kvm_handle_bad_page(struct kvm_vcpu *vcpu, gfn_t gfn, kvm_pfn_t pfn)
 static int handle_abnormal_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
 			       unsigned int access)
 {
-	/* The pfn is invalid, report the error! */
-	if (unlikely(is_error_pfn(fault->pfn)))
-		return kvm_handle_bad_page(vcpu, fault->gfn, fault->pfn);
-
 	if (unlikely(!fault->slot)) {
 		gva_t gva = fault->is_tdp ? 0 : fault->addr;
 
@@ -4144,7 +4140,7 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 		fault->slot = NULL;
 		fault->pfn = KVM_PFN_NOSLOT;
 		fault->map_writable = false;
-		return RET_PF_CONTINUE;
+		goto out;
 	}
 	/*
 	 * If the APIC access page exists but is disabled, go directly
@@ -4162,7 +4158,7 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 					  fault->write, &fault->map_writable,
 					  &fault->hva);
 	if (!async)
-		return RET_PF_CONTINUE; /* *pfn has correct page already */
+		goto out; /* *pfn has correct page already */
 
 	if (!fault->prefetch && kvm_can_do_async_pf(vcpu)) {
 		trace_kvm_try_async_get_page(fault->addr, fault->gfn);
@@ -4178,6 +4174,11 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, NULL,
 					  fault->write, &fault->map_writable,
 					  &fault->hva);
+
+out:
+	if (unlikely(is_error_pfn(fault->pfn)))
+		return kvm_handle_error_pfn(vcpu, fault->gfn, fault->pfn);
+
 	return RET_PF_CONTINUE;
 }
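A hedged sketch of the cleanup pattern in the patch above: check for error values where they are produced instead of asking every caller to remember a follow-up check. The names are illustrative, not KVM's.

#include <stdbool.h>

#define PFN_ERR ((unsigned long)-1)

static bool pfn_is_error(unsigned long pfn)
{
	return pfn == PFN_ERR;
}

static unsigned long lookup_pfn(unsigned long gfn)
{
	return gfn == 0 ? PFN_ERR : gfn + 0x100;   /* pretend translation */
}

/* After the patch: the producer handles the error itself, so callers no
 * longer need a separate handle_abnormal_pfn()-style check. */
static int faultin_pfn(unsigned long gfn, unsigned long *pfn)
{
	*pfn = lookup_pfn(gfn);
	return pfn_is_error(*pfn) ? -1 : 0;
}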
From patchwork Mon Aug 15 23:01:09 2022
Subject: [PATCH 8/9] KVM: x86/mmu: Avoid memslot lookup during KVM_PFN_ERR_HWPOISON handling
From: David Matlack
To: Paolo Bonzini
Cc: Sean Christopherson, Borislav Petkov, "Paul E. McKenney", Kees Cook,
    Peter Zijlstra, Andrew Morton, Randy Dunlap, Damien Le Moal,
    kvm@vger.kernel.org, David Matlack
Date: Mon, 15 Aug 2022 16:01:09 -0700
Message-Id: <20220815230110.2266741-9-dmatlack@google.com>
In-Reply-To: <20220815230110.2266741-1-dmatlack@google.com>
References: <20220815230110.2266741-1-dmatlack@google.com>

Pass the kvm_page_fault struct down to kvm_handle_error_pfn() to avoid
a memslot lookup when handling KVM_PFN_ERR_HWPOISON. Opportunistically
move the gfn_to_hva_memslot() call and @current down into
kvm_send_hwpoison_signal() to cut down on line lengths.

No functional change intended.

Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 36960ea0d4ef..47f4d1e81db1 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3129,23 +3129,25 @@ static int nonpaging_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	return ret;
 }
 
-static void kvm_send_hwpoison_signal(unsigned long address, struct task_struct *tsk)
+static void kvm_send_hwpoison_signal(struct kvm_memory_slot *slot, gfn_t gfn)
 {
-	send_sig_mceerr(BUS_MCEERR_AR, (void __user *)address, PAGE_SHIFT, tsk);
+	unsigned long hva = gfn_to_hva_memslot(slot, gfn);
+
+	send_sig_mceerr(BUS_MCEERR_AR, (void __user *)hva, PAGE_SHIFT, current);
 }
 
-static int kvm_handle_error_pfn(struct kvm_vcpu *vcpu, gfn_t gfn, kvm_pfn_t pfn)
+static int kvm_handle_error_pfn(struct kvm_page_fault *fault)
 {
 	/*
 	 * Do not cache the mmio info caused by writing the readonly gfn
 	 * into the spte otherwise read access on readonly gfn also can
 	 * caused mmio page fault and treat it as mmio access.
 	 */
-	if (pfn == KVM_PFN_ERR_RO_FAULT)
+	if (fault->pfn == KVM_PFN_ERR_RO_FAULT)
 		return RET_PF_EMULATE;
 
-	if (pfn == KVM_PFN_ERR_HWPOISON) {
-		kvm_send_hwpoison_signal(kvm_vcpu_gfn_to_hva(vcpu, gfn), current);
+	if (fault->pfn == KVM_PFN_ERR_HWPOISON) {
+		kvm_send_hwpoison_signal(fault->slot, fault->gfn);
 		return RET_PF_RETRY;
 	}
 
@@ -4177,7 +4179,7 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 
 out:
 	if (unlikely(is_error_pfn(fault->pfn)))
-		return kvm_handle_error_pfn(vcpu, fault->gfn, fault->pfn);
+		return kvm_handle_error_pfn(fault);
 
 	return RET_PF_CONTINUE;
 }
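A hedged sketch of the parameter change above: handing the whole fault context to the error handler lets it reuse the memslot that was already resolved during faultin, instead of re-deriving the hva from the gfn. Types and helpers here are simplified stand-ins.

struct memslot {
	unsigned long userspace_addr;
	unsigned long base_gfn;
};

struct fault_ctx {
	struct memslot *slot;           /* resolved earlier in the fault */
	unsigned long gfn;
};

static unsigned long gfn_to_hva_slot(const struct memslot *slot, unsigned long gfn)
{
	return slot->userspace_addr + ((gfn - slot->base_gfn) << 12);
}

/* After the patch: no fresh gfn->memslot lookup, just reuse fault->slot.
 * The kernel then sends a BUS_MCEERR_AR signal for the computed hva. */
static unsigned long hwpoison_hva(const struct fault_ctx *fault)
{
	return gfn_to_hva_slot(fault->slot, fault->gfn);
}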
From patchwork Mon Aug 15 23:01:10 2022
Subject: [PATCH 9/9] KVM: x86/mmu: Try to handle no-slot faults during kvm_faultin_pfn()
From: David Matlack
To: Paolo Bonzini
Cc: Sean Christopherson, Borislav Petkov, "Paul E. McKenney", Kees Cook,
    Peter Zijlstra, Andrew Morton, Randy Dunlap, Damien Le Moal,
    kvm@vger.kernel.org, David Matlack
Date: Mon, 15 Aug 2022 16:01:10 -0700
Message-Id: <20220815230110.2266741-10-dmatlack@google.com>
In-Reply-To: <20220815230110.2266741-1-dmatlack@google.com>
References: <20220815230110.2266741-1-dmatlack@google.com>

Try to handle faults on GFNs that do not have a backing memslot during
kvm_faultin_pfn(), rather than relying on the caller to call
handle_abnormal_pfn() right after kvm_faultin_pfn(). This shortens all
of the page fault paths by eliminating duplicate code.

Opportunistically tweak the comment about handling gfn > host.MAXPHYADDR
to reflect that the effect of returning RET_PF_EMULATE at that point is
to avoid creating an MMIO SPTE for such GFNs.

No functional change intended.

Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c         | 55 +++++++++++++++++-----------------
 arch/x86/kvm/mmu/paging_tmpl.h |  4 ---
 2 files changed, 27 insertions(+), 32 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 47f4d1e81db1..741b92b1f004 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3154,28 +3154,32 @@ static int kvm_handle_error_pfn(struct kvm_page_fault *fault)
 	return -EFAULT;
 }
 
-static int handle_abnormal_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
-			       unsigned int access)
+static int kvm_handle_noslot_fault(struct kvm_vcpu *vcpu,
+				   struct kvm_page_fault *fault,
+				   unsigned int access)
 {
-	if (unlikely(!fault->slot)) {
-		gva_t gva = fault->is_tdp ? 0 : fault->addr;
+	gva_t gva = fault->is_tdp ? 0 : fault->addr;
 
-		vcpu_cache_mmio_info(vcpu, gva, fault->gfn,
-				     access & shadow_mmio_access_mask);
-		/*
-		 * If MMIO caching is disabled, emulate immediately without
-		 * touching the shadow page tables as attempting to install an
-		 * MMIO SPTE will just be an expensive nop. Do not cache MMIO
-		 * whose gfn is greater than host.MAXPHYADDR, any guest that
-		 * generates such gfns is running nested and is being tricked
-		 * by L0 userspace (you can observe gfn > L1.MAXPHYADDR if
-		 * and only if L1's MAXPHYADDR is inaccurate with respect to
-		 * the hardware's).
-		 */
-		if (unlikely(!enable_mmio_caching) ||
-		    unlikely(fault->gfn > kvm_mmu_max_gfn()))
-			return RET_PF_EMULATE;
-	}
+	vcpu_cache_mmio_info(vcpu, gva, fault->gfn,
+			     access & shadow_mmio_access_mask);
+
+	/*
+	 * If MMIO caching is disabled, emulate immediately without
+	 * touching the shadow page tables as attempting to install an
+	 * MMIO SPTE will just be an expensive nop.
+	 */
+	if (unlikely(!enable_mmio_caching))
+		return RET_PF_EMULATE;
+
+	/*
+	 * Do not create an MMIO SPTE for a gfn greater than host.MAXPHYADDR,
+	 * any guest that generates such gfns is running nested and is being
+	 * tricked by L0 userspace (you can observe gfn > L1.MAXPHYADDR if and
+	 * only if L1's MAXPHYADDR is inaccurate with respect to the
+	 * hardware's).
+	 */
+	if (unlikely(fault->gfn > kvm_mmu_max_gfn()))
+		return RET_PF_EMULATE;
 
 	return RET_PF_CONTINUE;
 }
@@ -4181,6 +4185,9 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	if (unlikely(is_error_pfn(fault->pfn)))
 		return kvm_handle_error_pfn(fault);
 
+	if (unlikely(!fault->slot))
+		return kvm_handle_noslot_fault(vcpu, fault, ACC_ALL);
+
 	return RET_PF_CONTINUE;
 }
 
@@ -4239,10 +4246,6 @@ static int nonpaging_page_fault(struct kvm_vcpu *vcpu,
 	if (r != RET_PF_CONTINUE)
 		return r;
 
-	r = handle_abnormal_pfn(vcpu, fault, ACC_ALL);
-	if (r != RET_PF_CONTINUE)
-		return r;
-
 	r = RET_PF_RETRY;
 	write_lock(&vcpu->kvm->mmu_lock);
 
@@ -4338,10 +4341,6 @@ int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	if (r != RET_PF_CONTINUE)
 		return r;
 
-	r = handle_abnormal_pfn(vcpu, fault, ACC_ALL);
-	if (r != RET_PF_CONTINUE)
-		return r;
-
 	r = RET_PF_RETRY;
 	read_lock(&vcpu->kvm->mmu_lock);
 
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index a199db4acecc..cf19227e842c 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -841,10 +841,6 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	if (r != RET_PF_CONTINUE)
 		return r;
 
-	r = handle_abnormal_pfn(vcpu, fault, walker.pte_access);
-	if (r != RET_PF_CONTINUE)
-		return r;
-
 	/*
 	 * Do not change pte_access if the pfn is a mmio page, otherwise
 	 * we will cache the incorrect access into mmio spte.
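Continuing the consolidation sketched after patch 7, a hedged model of the final shape of the producer-side checks: error PFNs and GFNs with no backing memslot are both resolved inside the faultin step, so the three page fault paths lose their duplicated follow-up calls. All names are illustrative.

#include <stdbool.h>

struct memslot;

struct fault_ctx {
	struct memslot *slot;           /* NULL when the gfn has no backing memslot */
	unsigned long pfn;
};

enum fault_ret { PF_CONTINUE, PF_EMULATE, PF_ERROR };

static bool pfn_is_error(unsigned long pfn)
{
	return pfn == (unsigned long)-1;
}

/* All "abnormal" outcomes are now decided where the pfn is produced. */
static enum fault_ret faultin_pfn(struct fault_ctx *fault)
{
	if (pfn_is_error(fault->pfn))
		return PF_ERROR;        /* cf. kvm_handle_error_pfn()    */
	if (!fault->slot)
		return PF_EMULATE;      /* cf. kvm_handle_noslot_fault() */
	return PF_CONTINUE;
}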