From patchwork Wed Mar 31 21:08:29 2021
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12176329
Date: Wed, 31 Mar 2021 14:08:29 -0700
In-Reply-To: <20210331210841.3996155-1-bgardon@google.com>
Message-Id: <20210331210841.3996155-2-bgardon@google.com>
References: <20210331210841.3996155-1-bgardon@google.com>
Subject: [PATCH 01/13] KVM: x86/mmu: Re-add const qualifier in
kvm_tdp_mmu_zap_collapsible_sptes From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , Peter Feiner , Junaid Shahid , Jim Mattson , Yulei Zhang , Wanpeng Li , Vitaly Kuznetsov , Xiao Guangrong , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org kvm_tdp_mmu_zap_collapsible_sptes unnecessarily removes the const qualifier from its memslot argument, leading to a compiler warning. Add the const annotation and pass it to subsequent functions. Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/mmu.c | 10 +++++----- arch/x86/kvm/mmu/mmu_internal.h | 5 +++-- arch/x86/kvm/mmu/tdp_mmu.c | 4 ++-- arch/x86/kvm/mmu/tdp_mmu.h | 2 +- include/linux/kvm_host.h | 2 +- 5 files changed, 12 insertions(+), 11 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index c6ed633594a2..f75cbb0fcc9c 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -715,8 +715,7 @@ static void kvm_mmu_page_set_gfn(struct kvm_mmu_page *sp, int index, gfn_t gfn) * handling slots that are not large page aligned. */ static struct kvm_lpage_info *lpage_info_slot(gfn_t gfn, - struct kvm_memory_slot *slot, - int level) + const struct kvm_memory_slot *slot, int level) { unsigned long idx; @@ -2736,7 +2735,7 @@ static void direct_pte_prefetch(struct kvm_vcpu *vcpu, u64 *sptep) } static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, - struct kvm_memory_slot *slot) + const struct kvm_memory_slot *slot) { unsigned long hva; pte_t *pte; @@ -2762,8 +2761,9 @@ static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, return level; } -int kvm_mmu_max_mapping_level(struct kvm *kvm, struct kvm_memory_slot *slot, - gfn_t gfn, kvm_pfn_t pfn, int max_level) +int kvm_mmu_max_mapping_level(struct kvm *kvm, + const struct kvm_memory_slot *slot, gfn_t gfn, + kvm_pfn_t pfn, int max_level) { struct kvm_lpage_info *linfo; diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index e03267e93459..fc88f62d7bd9 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -156,8 +156,9 @@ enum { #define SET_SPTE_NEED_REMOTE_TLB_FLUSH BIT(1) #define SET_SPTE_SPURIOUS BIT(2) -int kvm_mmu_max_mapping_level(struct kvm *kvm, struct kvm_memory_slot *slot, - gfn_t gfn, kvm_pfn_t pfn, int max_level); +int kvm_mmu_max_mapping_level(struct kvm *kvm, + const struct kvm_memory_slot *slot, gfn_t gfn, + kvm_pfn_t pfn, int max_level); int kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn, int max_level, kvm_pfn_t *pfnp, bool huge_page_disallowed, int *req_level); diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index f2c335854afb..6d4f4e305163 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1268,7 +1268,7 @@ void kvm_tdp_mmu_clear_dirty_pt_masked(struct kvm *kvm, */ static void zap_collapsible_spte_range(struct kvm *kvm, struct kvm_mmu_page *root, - struct kvm_memory_slot *slot) + const struct kvm_memory_slot *slot) { gfn_t start = slot->base_gfn; gfn_t end = start + slot->npages; @@ -1309,7 +1309,7 @@ static void zap_collapsible_spte_range(struct kvm *kvm, * be replaced by large mappings, for GFNs within the slot.
*/ void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm, - struct kvm_memory_slot *slot) + const struct kvm_memory_slot *slot) { struct kvm_mmu_page *root; int root_as_id; diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h index 3b761c111bff..683d1d69c8c8 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.h +++ b/arch/x86/kvm/mmu/tdp_mmu.h @@ -34,7 +34,7 @@ void kvm_tdp_mmu_clear_dirty_pt_masked(struct kvm *kvm, gfn_t gfn, unsigned long mask, bool wrprot); void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm, - struct kvm_memory_slot *slot); + const struct kvm_memory_slot *slot); bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm, struct kvm_memory_slot *slot, gfn_t gfn); diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 1b65e7204344..74e56e8673a6 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1116,7 +1116,7 @@ __gfn_to_memslot(struct kvm_memslots *slots, gfn_t gfn) } static inline unsigned long -__gfn_to_hva_memslot(struct kvm_memory_slot *slot, gfn_t gfn) +__gfn_to_hva_memslot(const struct kvm_memory_slot *slot, gfn_t gfn) { return slot->userspace_addr + (gfn - slot->base_gfn) * PAGE_SIZE; } From patchwork Wed Mar 31 21:08:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12176339 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1E7FEC43461 for ; Wed, 31 Mar 2021 21:09:56 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E6A49610CB for ; Wed, 31 Mar 2021 21:09:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232801AbhCaVJ3 (ORCPT ); Wed, 31 Mar 2021 17:09:29 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47592 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230145AbhCaVJG (ORCPT ); Wed, 31 Mar 2021 17:09:06 -0400 Received: from mail-pg1-x54a.google.com (mail-pg1-x54a.google.com [IPv6:2607:f8b0:4864:20::54a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F36EBC061574 for ; Wed, 31 Mar 2021 14:09:05 -0700 (PDT) Received: by mail-pg1-x54a.google.com with SMTP id p64so2082769pga.10 for ; Wed, 31 Mar 2021 14:09:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=5cGMGQJTbkr5kU0eiDceomPmZqJS09u6WPxL5yh8vXM=; b=YuSbzqubyPg9lGhelarJ0gPJaiq9A7VtEK4raWqxa3kluRenhZ/sDnwtMZcgJAC01h a2HY1mt+bol9Jp0IBrZcRbmbogMxHXEk3oM6GimEh4DIxKPaqsHacH3bbc7rXdJ5CZpu +Cha9JokZYmBBPz7Hs6sfDRv/X7Sj/TkG37YJuC5E1SpQD+HxtybpDMfil+mC3dQRVam XzAJNpYUs5dBBRX5vXOaUjlQuk1eTpvhlCG8Irdi2zstVh+91YWrEyMZ4D40hsU1oZTT g8DfzEntCefXyIV0ov6nZQYr4xxqWbUUDVzoHLfIBydfEiJWl7OhuyKIUCv+VtIDZhas bmyA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; 
Date: Wed, 31 Mar 2021 14:08:30 -0700
In-Reply-To: <20210331210841.3996155-1-bgardon@google.com>
Message-Id: <20210331210841.3996155-3-bgardon@google.com>
References: <20210331210841.3996155-1-bgardon@google.com>
Subject: [PATCH 02/13] KVM: x86/mmu: Move kvm_mmu_(get|put)_root to TDP MMU
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier, Peter Feiner, Junaid Shahid, Jim Mattson, Yulei Zhang, Wanpeng Li, Vitaly Kuznetsov, Xiao Guangrong, Ben Gardon

The TDP MMU is almost the only user of kvm_mmu_get_root and kvm_mmu_put_root. There is only one use of put_root in mmu.c for the legacy / shadow MMU. Open code that one use and move the get / put functions to the TDP MMU so they can be extended in future commits.

No functional change intended.
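To make the refactor easier to follow, here is a minimal, self-contained user-space C sketch of the get / put pattern being moved; the struct and helper names are simplified stand-ins, not the kernel code. It shows why the single legacy-MMU caller can simply open code the decrement: put only reports whether the last reference was dropped, and the caller decides what to do about it.

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for struct kvm_mmu_page: only the reference count matters here. */
struct root_page {
        int root_count;
};

/* Like kvm_mmu_get_root(): take another reference; the caller must already
 * hold one (and, in the kernel, the MMU lock). */
static void get_root(struct root_page *root)
{
        assert(root->root_count > 0);
        root->root_count++;
}

/* Like kvm_mmu_put_root(): drop a reference and report whether it was the
 * last one, leaving the actual freeing to the caller. */
static bool put_root(struct root_page *root)
{
        return --root->root_count == 0;
}

int main(void)
{
        struct root_page root = { .root_count = 1 };

        get_root(&root);

        /* The single legacy-MMU user can open code the decrement... */
        if (--root.root_count == 0 /* && sp->role.invalid in the kernel */)
                printf("last reference dropped by open-coded path\n");

        /* ...while TDP MMU callers keep using the helper. */
        if (put_root(&root))
                printf("root can now be freed\n");

        return 0;
}

Keeping the helpers next to their only real user also makes it possible to change their semantics, as later patches in this series do, without touching the shadow MMU.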
Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/mmu.c | 10 ++++------ arch/x86/kvm/mmu/mmu_internal.h | 16 ---------------- arch/x86/kvm/mmu/tdp_mmu.c | 6 +++--- arch/x86/kvm/mmu/tdp_mmu.h | 18 ++++++++++++++++++ 4 files changed, 25 insertions(+), 25 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index f75cbb0fcc9c..618cc011f446 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3154,12 +3154,10 @@ static void mmu_free_root_page(struct kvm *kvm, hpa_t *root_hpa, sp = to_shadow_page(*root_hpa & PT64_BASE_ADDR_MASK); - if (kvm_mmu_put_root(kvm, sp)) { - if (is_tdp_mmu_page(sp)) - kvm_tdp_mmu_free_root(kvm, sp); - else if (sp->role.invalid) - kvm_mmu_prepare_zap_page(kvm, sp, invalid_list); - } + if (is_tdp_mmu_page(sp) && kvm_tdp_mmu_put_root(kvm, sp)) + kvm_tdp_mmu_free_root(kvm, sp); + else if (!--sp->root_count && sp->role.invalid) + kvm_mmu_prepare_zap_page(kvm, sp, invalid_list); *root_hpa = INVALID_PAGE; } diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index fc88f62d7bd9..788dcf77c957 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -118,22 +118,6 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm, void kvm_flush_remote_tlbs_with_address(struct kvm *kvm, u64 start_gfn, u64 pages); -static inline void kvm_mmu_get_root(struct kvm *kvm, struct kvm_mmu_page *sp) -{ - BUG_ON(!sp->root_count); - lockdep_assert_held(&kvm->mmu_lock); - - ++sp->root_count; -} - -static inline bool kvm_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *sp) -{ - lockdep_assert_held(&kvm->mmu_lock); - --sp->root_count; - - return !sp->root_count; -} - /* * Return values of handle_mmio_page_fault, mmu.page_fault, and fast_page_fault(). * diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 6d4f4e305163..1929cc7a42ac 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -43,7 +43,7 @@ void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm) static void tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root) { - if (kvm_mmu_put_root(kvm, root)) + if (kvm_tdp_mmu_put_root(kvm, root)) kvm_tdp_mmu_free_root(kvm, root); } @@ -55,7 +55,7 @@ static inline bool tdp_mmu_next_root_valid(struct kvm *kvm, if (list_entry_is_head(root, &kvm->arch.tdp_mmu_roots, link)) return false; - kvm_mmu_get_root(kvm, root); + kvm_tdp_mmu_get_root(kvm, root); return true; } @@ -150,7 +150,7 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu) /* Check for an existing root before allocating a new one. 
*/ for_each_tdp_mmu_root(kvm, root) { if (root->role.word == role.word) { - kvm_mmu_get_root(kvm, root); + kvm_tdp_mmu_get_root(kvm, root); goto out; } } diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h index 683d1d69c8c8..2dc3b3ba48fb 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.h +++ b/arch/x86/kvm/mmu/tdp_mmu.h @@ -8,6 +8,24 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu); void kvm_tdp_mmu_free_root(struct kvm *kvm, struct kvm_mmu_page *root); +static inline void kvm_tdp_mmu_get_root(struct kvm *kvm, + struct kvm_mmu_page *root) +{ + BUG_ON(!root->root_count); + lockdep_assert_held(&kvm->mmu_lock); + + ++root->root_count; +} + +static inline bool kvm_tdp_mmu_put_root(struct kvm *kvm, + struct kvm_mmu_page *root) +{ + lockdep_assert_held(&kvm->mmu_lock); + --root->root_count; + + return !root->root_count; +} + bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end); void kvm_tdp_mmu_zap_all(struct kvm *kvm); From patchwork Wed Mar 31 21:08:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12176333 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 307D8C433ED for ; Wed, 31 Mar 2021 21:09:56 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0A98D6109F for ; Wed, 31 Mar 2021 21:09:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232833AbhCaVJa (ORCPT ); Wed, 31 Mar 2021 17:09:30 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47610 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232757AbhCaVJJ (ORCPT ); Wed, 31 Mar 2021 17:09:09 -0400 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AED69C061574 for ; Wed, 31 Mar 2021 14:09:08 -0700 (PDT) Received: by mail-yb1-xb4a.google.com with SMTP id n13so3605756ybp.14 for ; Wed, 31 Mar 2021 14:09:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=foMxttOwu6SYavjwnG7oqUJ2WlokRAISZJZJ0feAVWI=; b=HzxTpFc/w24AxMwjTtlYMwP/8TI8y0GPxp/iMkyEs8ZgTb5INW2FMmZ0BvrnErb0th vnhXeNY52N5SBFtKxNlrSJTx2m90miUf/WEck1JveNYjPzVo96FX9aJBMeBEueAJSt5J /fCjKv9wLbyN7K51nHzWtvcmpc8Qv5vvtsGipYUGvmaDiMyusliXiIZkdiStcBvUYz1A DKBUoHDVbnjSGSRvVG9Oh4iU+yGVlrG2JD5UM+bN7GKkNbGK+qPcVWayK8ePB36GogGl kglGA3fvGdmSEdiwU5bEagRZnwlmsK6eqDpr/KoDbJPBmX6P2HdeZfalogN12hfkfCOt nXQQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=foMxttOwu6SYavjwnG7oqUJ2WlokRAISZJZJ0feAVWI=; b=IsmBmC78MDyiZXon9JsF2skMSPoxrZ8QJcm+DisP3zzBrkNRt0k2wz54wmJQFjtoz3 iM9hek1cUKpZYAx+ZjnfCQaw9I7h20V2019C5+eIov4bDf6+EJ/m2VLkWsGIP3FuEJ2M 
rn0PfShD5xOKnn/bpSZlmiLGzKp8lZkLGGSlcneyEg/n/PHUA5I4uQttgojsWZLOiJjT gFNvX1tHKvjkxy/NhuRtM5jZ9QsmCvyO0ln931lupvSI9j04lVbw0vGc2bQtp2PlgZ2P LRZPf79pogAE4+DzliVMP9RJ0um+kCRy0T9sAez+EYu+zp4GsMFYK7GzYsgrqI7AUFbY p8aA== X-Gm-Message-State: AOAM532jWLBOGPk30A67mCZuXh3JMJj1YUHV1J43D+irL5nw+rR8dbPT k96F1VWQEeDOG4/US4FThhB32F3/SUlR X-Google-Smtp-Source: ABdhPJyeLb8XwRV0troRnkuHZqf7A5YX/XYnXQT2L3T51rF0GWkqeTSsx2jj9AFTCqsKyHNf1fLEUQ4iCBtd X-Received: from bgardon.sea.corp.google.com ([2620:15c:100:202:8026:6888:3d55:3842]) (user=bgardon job=sendgmr) by 2002:a25:d093:: with SMTP id h141mr7341271ybg.292.1617224947988; Wed, 31 Mar 2021 14:09:07 -0700 (PDT) Date: Wed, 31 Mar 2021 14:08:31 -0700 In-Reply-To: <20210331210841.3996155-1-bgardon@google.com> Message-Id: <20210331210841.3996155-4-bgardon@google.com> Mime-Version: 1.0 References: <20210331210841.3996155-1-bgardon@google.com> X-Mailer: git-send-email 2.31.0.291.g576ba9dcdaf-goog Subject: [PATCH 03/13] KVM: x86/mmu: use tdp_mmu_free_sp to free roots From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , Peter Feiner , Junaid Shahid , Jim Mattson , Yulei Zhang , Wanpeng Li , Vitaly Kuznetsov , Xiao Guangrong , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Minor cleanup to deduplicate the code used to free a struct kvm_mmu_page in the TDP MMU. No functional change intended. Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/tdp_mmu.c | 15 +++++++-------- 1 file changed, 7 insertions(+), 8 deletions(-) diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 1929cc7a42ac..5a2698d64957 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -88,6 +88,12 @@ static inline struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm, static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, gfn_t start, gfn_t end, bool can_yield); +static void tdp_mmu_free_sp(struct kvm_mmu_page *sp) +{ + free_page((unsigned long)sp->spt); + kmem_cache_free(mmu_page_header_cache, sp); +} + void kvm_tdp_mmu_free_root(struct kvm *kvm, struct kvm_mmu_page *root) { gfn_t max_gfn = 1ULL << (shadow_phys_bits - PAGE_SHIFT); @@ -101,8 +107,7 @@ void kvm_tdp_mmu_free_root(struct kvm *kvm, struct kvm_mmu_page *root) zap_gfn_range(kvm, root, 0, max_gfn, false); - free_page((unsigned long)root->spt); - kmem_cache_free(mmu_page_header_cache, root); + tdp_mmu_free_sp(root); } static union kvm_mmu_page_role page_role_for_level(struct kvm_vcpu *vcpu, @@ -164,12 +169,6 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu) return __pa(root->spt); } -static void tdp_mmu_free_sp(struct kvm_mmu_page *sp) -{ - free_page((unsigned long)sp->spt); - kmem_cache_free(mmu_page_header_cache, sp); -} - /* * This is called through call_rcu in order to free TDP page table memory * safely with respect to other kernel threads that may be operating on From patchwork Wed Mar 31 21:08:32 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12176335 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no 
Date: Wed, 31 Mar 2021 14:08:32 -0700
In-Reply-To: <20210331210841.3996155-1-bgardon@google.com>
Message-Id: <20210331210841.3996155-5-bgardon@google.com>
References: <20210331210841.3996155-1-bgardon@google.com>
Subject: [PATCH 04/13] KVM: x86/mmu: Merge TDP MMU put and free root
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier, Peter Feiner, Junaid Shahid, Jim Mattson, Yulei Zhang, Wanpeng Li, Vitaly Kuznetsov, Xiao Guangrong, Ben Gardon

kvm_tdp_mmu_put_root and kvm_tdp_mmu_free_root are always called together, so merge the functions to simplify TDP MMU root refcounting / freeing.
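As a rough illustration of the resulting shape (a user-space sketch with stand-in names, not the kernel implementation), the merged put performs the teardown itself when the last reference is dropped, instead of returning a flag for the caller to pair with a separate free:

#include <stdio.h>
#include <stdlib.h>

struct root_page {
        int root_count;
        /* ...page table contents elided... */
};

/* Stand-ins for zap_gfn_range() and tdp_mmu_free_sp(). */
static void zap_root(struct root_page *root) { (void)root; }
static void free_root(struct root_page *root) { free(root); }

/*
 * Merged put + free, modeled on the new kvm_tdp_mmu_put_root(): dropping
 * the last reference immediately unlinks, zaps and frees the root, so
 * callers no longer pair a "put" with a separate "free".
 */
static void put_root(struct root_page *root)
{
        if (--root->root_count)
                return;

        zap_root(root);
        printf("last reference dropped: root zapped and freed\n");
        free_root(root);
}

int main(void)
{
        struct root_page *root = calloc(1, sizeof(*root));

        root->root_count = 2;   /* e.g. one vCPU plus one iterator */
        put_root(root);         /* still referenced, nothing happens */
        put_root(root);         /* last reference: teardown runs here */
        return 0;
}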
Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/mmu.c | 4 +-- arch/x86/kvm/mmu/tdp_mmu.c | 54 ++++++++++++++++++-------------------- arch/x86/kvm/mmu/tdp_mmu.h | 10 +------ 3 files changed, 28 insertions(+), 40 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 618cc011f446..667d64daa82c 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3154,8 +3154,8 @@ static void mmu_free_root_page(struct kvm *kvm, hpa_t *root_hpa, sp = to_shadow_page(*root_hpa & PT64_BASE_ADDR_MASK); - if (is_tdp_mmu_page(sp) && kvm_tdp_mmu_put_root(kvm, sp)) - kvm_tdp_mmu_free_root(kvm, sp); + if (is_tdp_mmu_page(sp)) + kvm_tdp_mmu_put_root(kvm, sp); else if (!--sp->root_count && sp->role.invalid) kvm_mmu_prepare_zap_page(kvm, sp, invalid_list); diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 5a2698d64957..368091adab09 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -41,10 +41,31 @@ void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm) rcu_barrier(); } -static void tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root) +static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, + gfn_t start, gfn_t end, bool can_yield); + +static void tdp_mmu_free_sp(struct kvm_mmu_page *sp) { - if (kvm_tdp_mmu_put_root(kvm, root)) - kvm_tdp_mmu_free_root(kvm, root); + free_page((unsigned long)sp->spt); + kmem_cache_free(mmu_page_header_cache, sp); +} + +void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root) +{ + gfn_t max_gfn = 1ULL << (shadow_phys_bits - PAGE_SHIFT); + + lockdep_assert_held_write(&kvm->mmu_lock); + + if (--root->root_count) + return; + + WARN_ON(!root->tdp_mmu_page); + + list_del(&root->link); + + zap_gfn_range(kvm, root, 0, max_gfn, false); + + tdp_mmu_free_sp(root); } static inline bool tdp_mmu_next_root_valid(struct kvm *kvm, @@ -66,7 +87,7 @@ static inline struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm, struct kvm_mmu_page *next_root; next_root = list_next_entry(root, link); - tdp_mmu_put_root(kvm, root); + kvm_tdp_mmu_put_root(kvm, root); return next_root; } @@ -85,31 +106,6 @@ static inline struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm, #define for_each_tdp_mmu_root(_kvm, _root) \ list_for_each_entry(_root, &_kvm->arch.tdp_mmu_roots, link) -static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, - gfn_t start, gfn_t end, bool can_yield); - -static void tdp_mmu_free_sp(struct kvm_mmu_page *sp) -{ - free_page((unsigned long)sp->spt); - kmem_cache_free(mmu_page_header_cache, sp); -} - -void kvm_tdp_mmu_free_root(struct kvm *kvm, struct kvm_mmu_page *root) -{ - gfn_t max_gfn = 1ULL << (shadow_phys_bits - PAGE_SHIFT); - - lockdep_assert_held_write(&kvm->mmu_lock); - - WARN_ON(root->root_count); - WARN_ON(!root->tdp_mmu_page); - - list_del(&root->link); - - zap_gfn_range(kvm, root, 0, max_gfn, false); - - tdp_mmu_free_sp(root); -} - static union kvm_mmu_page_role page_role_for_level(struct kvm_vcpu *vcpu, int level) { diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h index 2dc3b3ba48fb..5d950e987fc7 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.h +++ b/arch/x86/kvm/mmu/tdp_mmu.h @@ -6,7 +6,6 @@ #include hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu); -void kvm_tdp_mmu_free_root(struct kvm *kvm, struct kvm_mmu_page *root); static inline void kvm_tdp_mmu_get_root(struct kvm *kvm, struct kvm_mmu_page *root) @@ -17,14 +16,7 @@ static inline void kvm_tdp_mmu_get_root(struct kvm *kvm, ++root->root_count; } -static inline bool 
kvm_tdp_mmu_put_root(struct kvm *kvm, - struct kvm_mmu_page *root) -{ - lockdep_assert_held(&kvm->mmu_lock); - --root->root_count; - - return !root->root_count; -} +void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root); bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end); void kvm_tdp_mmu_zap_all(struct kvm *kvm); From patchwork Wed Mar 31 21:08:33 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12176331 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 61955C43470 for ; Wed, 31 Mar 2021 21:09:56 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 48BE3610A8 for ; Wed, 31 Mar 2021 21:09:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232903AbhCaVJe (ORCPT ); Wed, 31 Mar 2021 17:09:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47632 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232787AbhCaVJO (ORCPT ); Wed, 31 Mar 2021 17:09:14 -0400 Received: from mail-pf1-x449.google.com (mail-pf1-x449.google.com [IPv6:2607:f8b0:4864:20::449]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BA749C061574 for ; Wed, 31 Mar 2021 14:09:13 -0700 (PDT) Received: by mail-pf1-x449.google.com with SMTP id p24so2104141pff.8 for ; Wed, 31 Mar 2021 14:09:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=+bpx1iaqeW64f43Q16KGFU1OSLWhWp6bSNi8pmvu8l0=; b=jyJf6Erx3Vr6KJglW79Mt9jpWyBfKg1VDB6pFb+pByemXvErG6JRy0FUoOJxqxgitM 10ZFCQFLqwwLahmtcTRSEG8U5PYqEgLzB6vvamJL0TgokxBg77NuafD9FmSLUGAWlR2l TO1feqWom5WBJZ+PB0Jmc10dFZTo892Ejo3/q+DFMrYEMN3Z6/Zdy70xY80/PkSy5Y1i rfytyXljgwnj7OpY8WgeZW0p7ytlGlgDAOrcKAjZRLPVrnq/fQIfoWK/9JkQs0F4WJgh 8E79NeK4u9cKNIpcodpfc7hxvLn3royzuTgDWLm2XMza+HJBfLwc/62q0MXM2Pka3rtc Sz3Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=+bpx1iaqeW64f43Q16KGFU1OSLWhWp6bSNi8pmvu8l0=; b=AtQVxdrwvyvTsXVi0Nf+FisK62dMLxdxmD+dZs0jrypjZgkpjt0l3ZgaPapAu84bnC phnQUahiNwglhQYscTecbWy8FiBZGCJvjO/SAq2KGzL2Xb9FlFCXEw5X0hZFB15wv0PH 3PGBlUfvtuS993iJ6PdRYfZuYyyXcvzHhoWZzUKDdWt88x9HRZ0E+eV7ssguD9TL0cRG Vv/mPWsuxVjCcx0HdS7FIBYoC5+KpizLOHyC8D/E7KEcvnvLml/3EQWXhT5b3ezCQci3 cfeRtCS8qIDLGREEhmZbQJQ0VF43b9shSw724anYZe54JxiRDEhm9NHt5VYjCAUnFvDi HKDQ== X-Gm-Message-State: AOAM532sd0BgtjT9OGkWYrehxffXvUgM2MMxTcnFV0k/yw0VGobbfabD UuLZPeF43xGFJWBA6X/sNFNSB5P5Nkqw X-Google-Smtp-Source: ABdhPJwkN2GKx1FcEh1BYC9eoqZXYP0omsgJCtYeXf9CgVLacsMfVHLK0cd5Q4NDJf1CNg+xFmGUQnWibMfl X-Received: from bgardon.sea.corp.google.com ([2620:15c:100:202:8026:6888:3d55:3842]) (user=bgardon job=sendgmr) by 2002:a62:8811:0:b029:1ef:2105:3594 with SMTP id 
l17-20020a6288110000b02901ef21053594mr4789197pfd.70.1617224953241; Wed, 31 Mar 2021 14:09:13 -0700 (PDT) Date: Wed, 31 Mar 2021 14:08:33 -0700 In-Reply-To: <20210331210841.3996155-1-bgardon@google.com> Message-Id: <20210331210841.3996155-6-bgardon@google.com> Mime-Version: 1.0 References: <20210331210841.3996155-1-bgardon@google.com> X-Mailer: git-send-email 2.31.0.291.g576ba9dcdaf-goog Subject: [PATCH 05/13] KVM: x86/mmu: comment for_each_tdp_mmu_root requires MMU write lock From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , Peter Feiner , Junaid Shahid , Jim Mattson , Yulei Zhang , Wanpeng Li , Vitaly Kuznetsov , Xiao Guangrong , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Currently, iterating over the list of TDP MMU roots can only be done under the MMU write lock, but that will change in future commits. Add a defensive comment to for_each_tdp_mmu_root noting that it must only be used under the MMU lock in write mode. That function will not be modified to work under the lock in read mode. Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/tdp_mmu.c | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 368091adab09..365fa9f2f856 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -103,6 +103,7 @@ static inline struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm, tdp_mmu_next_root_valid(_kvm, _root); \ _root = tdp_mmu_next_root(_kvm, _root)) +/* Only safe under the MMU lock in write mode, without yielding. */ #define for_each_tdp_mmu_root(_kvm, _root) \ list_for_each_entry(_root, &_kvm->arch.tdp_mmu_roots, link) From patchwork Wed Mar 31 21:08:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12176337 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7F12FC43600 for ; Wed, 31 Mar 2021 21:09:56 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5E04F6109D for ; Wed, 31 Mar 2021 21:09:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232716AbhCaVJf (ORCPT ); Wed, 31 Mar 2021 17:09:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47644 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231315AbhCaVJQ (ORCPT ); Wed, 31 Mar 2021 17:09:16 -0400 Received: from mail-pl1-x64a.google.com (mail-pl1-x64a.google.com [IPv6:2607:f8b0:4864:20::64a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9D429C061574 for ; Wed, 31 Mar 2021 14:09:16 -0700 (PDT) Received: by mail-pl1-x64a.google.com with SMTP id z1so1735288plg.14 for ; Wed, 31 Mar 2021 14:09:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; 
bh=PBC6qMg1yp7zSgTfCYeKU3t37KueZdRSKyb3wsQglII=; b=UyDqiSiOfuU7dROlMImswHWa8sAmJwgjp4h9gBc7H7JdTsYYUcf5lJORWi3yz90FMa QzMCIQJJRsxCwKGylVZlHvc6uZhUODoWpq4y88BsF/Rt3hz7+cB/nRVbjSpPA1Q38ElT iTeIjJxe03YjKCY603aFnf+6vOpW1B4IPwncrSY7mi8/w7uYj+0/1ileCS9vVu2MGZCx 6PYdMZtXPkteBEJvCFHMSpUgIeto0Lj3r241WZkYMjAUDahPYgK3C/ODuHvAP2y6U+Cg WqetLc2fNObcIa0uUAuME6PjT0O8yGEESezvtddatepPlLEtE5O7bgM7KVngzToEya6l s2ZQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=PBC6qMg1yp7zSgTfCYeKU3t37KueZdRSKyb3wsQglII=; b=JR05BQpLjPq7S8KScCSxBUm22K3bbHF1qcbR2b2o7JgeIQpmlXzn3e8A8RNMGbvnsr 02ZdSE7i49U1KVI5WovVhIzzu/xRatopGllPp/cn/p6jJ4Joen59Tzkl9kSv4dJzRuTv OKcchewBNSfgTHrZD9gD8Q4eosp89FYvDujxc8Pge1c8RwIRr4wizTGHkSKHxP74x65Y +l3ry+sPwp6jURSFXduVrHR26QPbAv+UidhAb+gVIlRo36g+DF+ZaJ187/0o1t1d9ypo dNDbvNiKK1vVZUQTuhR2O3zdEm651tygFnJwbWza+jwb0glP+8EBwBGjK+1P9xhDaWDQ 9bxA== X-Gm-Message-State: AOAM533tlQP2SPEhfAHi9oAnQ5r2wF3ywGbRhHN9VRy1jzsF/+EQKpSc W5lhxXGFuYkQhPm6Zumve8TZamoUZel7 X-Google-Smtp-Source: ABdhPJyPIMGsEjIi6OVPcEjcHY5de9X+R0pwVamcqyVisnMq5J2A6sncL8zyB5j4kezCMsTCLoDLsUYGZs5g X-Received: from bgardon.sea.corp.google.com ([2620:15c:100:202:8026:6888:3d55:3842]) (user=bgardon job=sendgmr) by 2002:a05:6a00:b86:b029:205:c773:5c69 with SMTP id g6-20020a056a000b86b0290205c7735c69mr4681713pfj.60.1617224956174; Wed, 31 Mar 2021 14:09:16 -0700 (PDT) Date: Wed, 31 Mar 2021 14:08:34 -0700 In-Reply-To: <20210331210841.3996155-1-bgardon@google.com> Message-Id: <20210331210841.3996155-7-bgardon@google.com> Mime-Version: 1.0 References: <20210331210841.3996155-1-bgardon@google.com> X-Mailer: git-send-email 2.31.0.291.g576ba9dcdaf-goog Subject: [PATCH 06/13] KVM: x86/mmu: Refactor yield safe root iterator From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , Peter Feiner , Junaid Shahid , Jim Mattson , Yulei Zhang , Wanpeng Li , Vitaly Kuznetsov , Xiao Guangrong , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Refactor the yield safe TDP MMU root iterator to be more amenable to changes in future commits which will allow it to be used under the MMU lock in read mode. Currently the iterator requires a complicated dance between the helper functions and different parts of the for loop which makes it hard to reason about. Moving all the logic into a single function simplifies the iterator substantially. Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/tdp_mmu.c | 43 ++++++++++++++++++++++---------------- 1 file changed, 25 insertions(+), 18 deletions(-) diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 365fa9f2f856..ab1d26b40164 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -68,26 +68,34 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root) tdp_mmu_free_sp(root); } -static inline bool tdp_mmu_next_root_valid(struct kvm *kvm, - struct kvm_mmu_page *root) +/* + * Finds the next valid root after root (or the first valid root if root + * is NULL), takes a reference on it, and returns that next root. If root + * is not NULL, this thread should have already taken a reference on it, and + * that reference will be dropped. If no valid root is found, this + * function will return NULL. 
+ */ +static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm, + struct kvm_mmu_page *prev_root) { - lockdep_assert_held_write(&kvm->mmu_lock); + struct kvm_mmu_page *next_root; - if (list_entry_is_head(root, &kvm->arch.tdp_mmu_roots, link)) - return false; + lockdep_assert_held_write(&kvm->mmu_lock); - kvm_tdp_mmu_get_root(kvm, root); - return true; + if (prev_root) + next_root = list_next_entry(prev_root, link); + else + next_root = list_first_entry(&kvm->arch.tdp_mmu_roots, + typeof(*next_root), link); -} + if (list_entry_is_head(next_root, &kvm->arch.tdp_mmu_roots, link)) + next_root = NULL; + else + kvm_tdp_mmu_get_root(kvm, next_root); -static inline struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm, - struct kvm_mmu_page *root) -{ - struct kvm_mmu_page *next_root; + if (prev_root) + kvm_tdp_mmu_put_root(kvm, prev_root); - next_root = list_next_entry(root, link); - kvm_tdp_mmu_put_root(kvm, root); return next_root; } @@ -97,10 +105,9 @@ static inline struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm, * if exiting the loop early, the caller must drop the reference to the most * recent root. (Unless keeping a live reference is desirable.) */ -#define for_each_tdp_mmu_root_yield_safe(_kvm, _root) \ - for (_root = list_first_entry(&_kvm->arch.tdp_mmu_roots, \ - typeof(*_root), link); \ - tdp_mmu_next_root_valid(_kvm, _root); \ +#define for_each_tdp_mmu_root_yield_safe(_kvm, _root) \ + for (_root = tdp_mmu_next_root(_kvm, NULL); \ + _root; \ _root = tdp_mmu_next_root(_kvm, _root)) /* Only safe under the MMU lock in write mode, without yielding. */ From patchwork Wed Mar 31 21:08:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12176341 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E5DA4C43616 for ; Wed, 31 Mar 2021 21:09:56 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id BE9DD6109D for ; Wed, 31 Mar 2021 21:09:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232944AbhCaVJf (ORCPT ); Wed, 31 Mar 2021 17:09:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47664 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232796AbhCaVJU (ORCPT ); Wed, 31 Mar 2021 17:09:20 -0400 Received: from mail-pf1-x44a.google.com (mail-pf1-x44a.google.com [IPv6:2607:f8b0:4864:20::44a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0D99BC061574 for ; Wed, 31 Mar 2021 14:09:20 -0700 (PDT) Received: by mail-pf1-x44a.google.com with SMTP id e6so2110481pfe.3 for ; Wed, 31 Mar 2021 14:09:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=msexY4B3tOucLNk9KoxMyLN7EdGWZ7JqHSS0HucxajQ=; b=acKLmz26/bohY9s+HwE4Mtg+wIKJx7wMoLjz4HKTGvLeTUOoaD7kUOAhslbonHdj5G 45q2jRPEMZEAA+wh/ZsjURarkAtiMx4hBPcdSGRB0Ey0gCkQLGMAkayWVKizbTp56Tgb 
Date: Wed, 31 Mar 2021 14:08:35 -0700
In-Reply-To: <20210331210841.3996155-1-bgardon@google.com>
Message-Id: <20210331210841.3996155-8-bgardon@google.com>
References: <20210331210841.3996155-1-bgardon@google.com>
Subject: [PATCH 07/13] KVM: x86/mmu: Make TDP MMU root refcount atomic
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier, Peter Feiner, Junaid Shahid, Jim Mattson, Yulei Zhang, Wanpeng Li, Vitaly Kuznetsov, Xiao Guangrong, Ben Gardon

In order to parallelize more operations for the TDP MMU, make the refcount on TDP MMU roots atomic, so that a future patch can allow multiple threads to take a reference on the root concurrently, while holding the MMU lock in read mode.
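The key primitive an atomic count enables is a "get" that can fail: once the count has dropped to zero the root is being torn down and must be skipped rather than resurrected. A self-contained C11 sketch of such a try-get, analogous to refcount_inc_not_zero() but using stand-in types rather than the kernel's refcount API, might look like this:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct root_page {
        atomic_int root_count;
};

/*
 * Try-get modeled on refcount_inc_not_zero(): increment only if the count
 * is still non-zero, so concurrent readers (holding the MMU lock in read
 * mode) never resurrect a root whose last reference has already been put.
 */
static bool try_get_root(struct root_page *root)
{
        int old = atomic_load(&root->root_count);

        do {
                if (old == 0)
                        return false;
        } while (!atomic_compare_exchange_weak(&root->root_count, &old, old + 1));

        return true;
}

/* Returns true when the last reference is dropped. */
static bool put_root(struct root_page *root)
{
        return atomic_fetch_sub(&root->root_count, 1) == 1;
}

int main(void)
{
        struct root_page root;

        atomic_init(&root.root_count, 1);
        printf("get: %d\n", try_get_root(&root));   /* 1: got a reference */
        put_root(&root);
        put_root(&root);                            /* count now 0 */
        printf("get: %d\n", try_get_root(&root));   /* 0: root is dying */
        return 0;
}

Under the MMU lock in read mode several threads may race on the same root; the compare-and-swap loop guarantees none of them can succeed after the last reference has been put.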
Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/mmu_internal.h | 6 +++++- arch/x86/kvm/mmu/tdp_mmu.c | 15 ++++++++------- arch/x86/kvm/mmu/tdp_mmu.h | 9 +++------ 3 files changed, 16 insertions(+), 14 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index 788dcf77c957..0a040d6a4f35 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -50,7 +50,11 @@ struct kvm_mmu_page { u64 *spt; /* hold the gfn of each spte inside spt */ gfn_t *gfns; - int root_count; /* Currently serving as active root */ + /* Currently serving as active root */ + union { + int root_count; + refcount_t tdp_mmu_root_count; + }; unsigned int unsync_children; struct kvm_rmap_head parent_ptes; /* rmap pointers to parent sptes */ DECLARE_BITMAP(unsync_child_bitmap, 512); diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index ab1d26b40164..1f0b2d6124a2 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -56,7 +56,7 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root) lockdep_assert_held_write(&kvm->mmu_lock); - if (--root->root_count) + if (!refcount_dec_and_test(&root->tdp_mmu_root_count)) return; WARN_ON(!root->tdp_mmu_page); @@ -88,10 +88,12 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm, next_root = list_first_entry(&kvm->arch.tdp_mmu_roots, typeof(*next_root), link); + while (!list_entry_is_head(next_root, &kvm->arch.tdp_mmu_roots, link) && + !kvm_tdp_mmu_get_root(kvm, next_root)) + next_root = list_next_entry(next_root, link); + if (list_entry_is_head(next_root, &kvm->arch.tdp_mmu_roots, link)) next_root = NULL; - else - kvm_tdp_mmu_get_root(kvm, next_root); if (prev_root) kvm_tdp_mmu_put_root(kvm, prev_root); @@ -158,14 +160,13 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu) /* Check for an existing root before allocating a new one. 
*/ for_each_tdp_mmu_root(kvm, root) { - if (root->role.word == role.word) { - kvm_tdp_mmu_get_root(kvm, root); + if (root->role.word == role.word && + kvm_tdp_mmu_get_root(kvm, root)) goto out; - } } root = alloc_tdp_mmu_page(vcpu, 0, vcpu->arch.mmu->shadow_root_level); - root->root_count = 1; + refcount_set(&root->tdp_mmu_root_count, 1); list_add(&root->link, &kvm->arch.tdp_mmu_roots); diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h index 5d950e987fc7..9961df505067 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.h +++ b/arch/x86/kvm/mmu/tdp_mmu.h @@ -7,13 +7,10 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu); -static inline void kvm_tdp_mmu_get_root(struct kvm *kvm, - struct kvm_mmu_page *root) +__must_check static inline bool kvm_tdp_mmu_get_root(struct kvm *kvm, + struct kvm_mmu_page *root) { - BUG_ON(!root->root_count); - lockdep_assert_held(&kvm->mmu_lock); - - ++root->root_count; + return refcount_inc_not_zero(&root->tdp_mmu_root_count); } void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root); From patchwork Wed Mar 31 21:08:36 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12176343 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3E876C43460 for ; Wed, 31 Mar 2021 21:10:26 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0470361075 for ; Wed, 31 Mar 2021 21:10:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232724AbhCaVJ4 (ORCPT ); Wed, 31 Mar 2021 17:09:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47688 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232600AbhCaVJZ (ORCPT ); Wed, 31 Mar 2021 17:09:25 -0400 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7FA66C061574 for ; Wed, 31 Mar 2021 14:09:23 -0700 (PDT) Received: by mail-yb1-xb4a.google.com with SMTP id g9so3579447ybc.19 for ; Wed, 31 Mar 2021 14:09:23 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=4Z5DPHIxBqQOKHjRX3419cZx487AhFXogwK482e92RI=; b=pnFNnXTzneAQwsP6VjZ8gYN4fkn7BBGwK3c0UQMOOg/06CR3IRNpdXcbAKffpMwDkW ef3PoN6slLj5Y6cOC1SxLy0PbllvEQk8i55k/cLjSU7H1v/69HFIJsmMlvTHAX411m7W lptY4GkLRRpfP/QQpXm5MyZXm8hxirUOp4ojMl0j5ot8mm7PRFXlQ+8FUnnOz8B6JV7o wrYPCCb7MAf/aBBh+qCPyV8c5yk7jPl5L32mcaYvogBjRlKX8vZVdjHfZ9rr+VwBCyr0 SH6Q5+jLAN6JtY61UCxQJ24hWhxUUWMvQ3S1YbW3+jFnM/dOn5x3wE9aeIRwV4Y7n1M+ J2uw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=4Z5DPHIxBqQOKHjRX3419cZx487AhFXogwK482e92RI=; b=A1Geg61Vw0IyQdaTeYFRkXOA9lG0RLWOJAAmEQBCiivqctofCG04aljPd/CBbY3+JF 
u/jrbbpDHk/6hZD8ei2K0MFeqiwaU5jEwrDio1hDW7an4B/Fql0vNo5dsGiuR+PfSgdo xBwi+swGY2MDK/GaIeWKCWvV0lzTwvCME7Yap6mFnadMGI/v5NxH8vl1E1ZSEKqTKMix ACIwzkk3l4nEAYIwxlYmtmNQpPffsuLLm01/rZHYpVSBdiGGecdDNM3HMR8Gwr8TfrO/ wkIxYKkIjnOxXQ1FLBLVsmxoHRduVUUl1k7vtPbnYzcFtsN6So/FTNOLR1uyGaGFj8jx hFDw== X-Gm-Message-State: AOAM532410hT4zJOKCSc0vBm1rYY/Cjl7Ovn9+O/6KPP34eCcBwo56gj P9nkDjNhgH/huuo2yjqSX9KPn0qIOFU3 X-Google-Smtp-Source: ABdhPJzlscKP3kxFyALQEMllJNjDwdOXzrBgvM2BNGOuvLo//Akt+s9hBoFhVGguxvMwQheSeD50kb+gxV6c X-Received: from bgardon.sea.corp.google.com ([2620:15c:100:202:8026:6888:3d55:3842]) (user=bgardon job=sendgmr) by 2002:a25:3346:: with SMTP id z67mr7191828ybz.443.1617224962800; Wed, 31 Mar 2021 14:09:22 -0700 (PDT) Date: Wed, 31 Mar 2021 14:08:36 -0700 In-Reply-To: <20210331210841.3996155-1-bgardon@google.com> Message-Id: <20210331210841.3996155-9-bgardon@google.com> Mime-Version: 1.0 References: <20210331210841.3996155-1-bgardon@google.com> X-Mailer: git-send-email 2.31.0.291.g576ba9dcdaf-goog Subject: [PATCH 08/13] KVM: x86/mmu: Protect the tdp_mmu_roots list with RCU From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , Peter Feiner , Junaid Shahid , Jim Mattson , Yulei Zhang , Wanpeng Li , Vitaly Kuznetsov , Xiao Guangrong , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Protect the contents of the TDP MMU roots list with RCU in preparation for a future patch which will allow the iterator macro to be used under the MMU lock in read mode. Signed-off-by: Ben Gardon Reported-by: kernel test robot --- arch/x86/kvm/mmu/tdp_mmu.c | 64 +++++++++++++++++++++----------------- 1 file changed, 36 insertions(+), 28 deletions(-) diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 1f0b2d6124a2..d255125059c4 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -50,6 +50,22 @@ static void tdp_mmu_free_sp(struct kvm_mmu_page *sp) kmem_cache_free(mmu_page_header_cache, sp); } +/* + * This is called through call_rcu in order to free TDP page table memory + * safely with respect to other kernel threads that may be operating on + * the memory. + * By only accessing TDP MMU page table memory in an RCU read critical + * section, and freeing it after a grace period, lockless access to that + * memory won't use it after it is freed. 
+ */ +static void tdp_mmu_free_sp_rcu_callback(struct rcu_head *head) +{ + struct kvm_mmu_page *sp = container_of(head, struct kvm_mmu_page, + rcu_head); + + tdp_mmu_free_sp(sp); +} + void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root) { gfn_t max_gfn = 1ULL << (shadow_phys_bits - PAGE_SHIFT); @@ -61,11 +77,13 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root) WARN_ON(!root->tdp_mmu_page); - list_del(&root->link); + spin_lock(&kvm->arch.tdp_mmu_pages_lock); + list_del_rcu(&root->link); + spin_unlock(&kvm->arch.tdp_mmu_pages_lock); zap_gfn_range(kvm, root, 0, max_gfn, false); - tdp_mmu_free_sp(root); + call_rcu(&root->rcu_head, tdp_mmu_free_sp_rcu_callback); } /* @@ -82,18 +100,21 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm, lockdep_assert_held_write(&kvm->mmu_lock); + rcu_read_lock(); + if (prev_root) - next_root = list_next_entry(prev_root, link); + next_root = list_next_or_null_rcu(&kvm->arch.tdp_mmu_roots, + &prev_root->link, + typeof(*prev_root), link); else - next_root = list_first_entry(&kvm->arch.tdp_mmu_roots, - typeof(*next_root), link); + next_root = list_first_or_null_rcu(&kvm->arch.tdp_mmu_roots, + typeof(*next_root), link); - while (!list_entry_is_head(next_root, &kvm->arch.tdp_mmu_roots, link) && - !kvm_tdp_mmu_get_root(kvm, next_root)) - next_root = list_next_entry(next_root, link); + while (next_root && !kvm_tdp_mmu_get_root(kvm, next_root)) + next_root = list_next_or_null_rcu(&kvm->arch.tdp_mmu_roots, + &next_root->link, typeof(*next_root), link); - if (list_entry_is_head(next_root, &kvm->arch.tdp_mmu_roots, link)) - next_root = NULL; + rcu_read_unlock(); if (prev_root) kvm_tdp_mmu_put_root(kvm, prev_root); @@ -114,7 +135,8 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm, /* Only safe under the MMU lock in write mode, without yielding. */ #define for_each_tdp_mmu_root(_kvm, _root) \ - list_for_each_entry(_root, &_kvm->arch.tdp_mmu_roots, link) + list_for_each_entry_rcu(_root, &_kvm->arch.tdp_mmu_roots, link, \ + lockdep_is_held_write(&kvm->mmu_lock)) static union kvm_mmu_page_role page_role_for_level(struct kvm_vcpu *vcpu, int level) @@ -168,28 +190,14 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu) root = alloc_tdp_mmu_page(vcpu, 0, vcpu->arch.mmu->shadow_root_level); refcount_set(&root->tdp_mmu_root_count, 1); - list_add(&root->link, &kvm->arch.tdp_mmu_roots); + spin_lock(&kvm->arch.tdp_mmu_pages_lock); + list_add_rcu(&root->link, &kvm->arch.tdp_mmu_roots); + spin_unlock(&kvm->arch.tdp_mmu_pages_lock); out: return __pa(root->spt); } -/* - * This is called through call_rcu in order to free TDP page table memory - * safely with respect to other kernel threads that may be operating on - * the memory. - * By only accessing TDP MMU page table memory in an RCU read critical - * section, and freeing it after a grace period, lockless access to that - * memory won't use it after it is freed. 
- */ -static void tdp_mmu_free_sp_rcu_callback(struct rcu_head *head) -{ - struct kvm_mmu_page *sp = container_of(head, struct kvm_mmu_page, - rcu_head); - - tdp_mmu_free_sp(sp); -} - static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn, u64 old_spte, u64 new_spte, int level, bool shared); From patchwork Wed Mar 31 21:08:37 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12176347 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5B9E4C433ED for ; Wed, 31 Mar 2021 21:10:26 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 210CB61056 for ; Wed, 31 Mar 2021 21:10:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231278AbhCaVJ7 (ORCPT ); Wed, 31 Mar 2021 17:09:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47700 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232682AbhCaVJ1 (ORCPT ); Wed, 31 Mar 2021 17:09:27 -0400 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 586A4C06175F for ; Wed, 31 Mar 2021 14:09:26 -0700 (PDT) Received: by mail-yb1-xb4a.google.com with SMTP id 71so525013ybl.0 for ; Wed, 31 Mar 2021 14:09:26 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=D3yhdA0SbfOZ8/HUhdVLoEpIVed7C/spdyqXqc6kmuA=; b=di+hCO0vOSPxyz7cYAcvQmi7yiDWrPABjxVZkRfxNGs8qhYE9c2E6mTpZMfg7Vj31i RdoWErxhipOlndt6dMpiDVYG3ScVEtG0T0/JCFyGUsIsJ2R4evxFGtTslj2BA6fLvDC0 +kwYrUs/PFU1Oe85mAC3V3pkvPCJzzeptD3T3/fS0VTYxpF9rGsT2JLTRUS5UyeJAduR W56W0X7YibxX3GtSuO56ffZpSM+lLk/9VuNXNF2tIQtImhOw4qrl0dqY0mLGHSD2Tzu5 FHI+bhZUHj/YJWXH6DJ6KbWM6XdbK+wVXJgQTpmag4sUX2OLNwSlrldrnZuk9tza6X+w YcqQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=D3yhdA0SbfOZ8/HUhdVLoEpIVed7C/spdyqXqc6kmuA=; b=uiePJlRAjqYicg/ocNNMP5wE3JL0vqtUO7jFHKQT4uNZTl3tmEPpdIyhx7pd++bLmL DMA7pN6EUaKEoDCdJkH1G+SZFaCa4fPJAkfqbfkLLpM+RP+ZC4r7evMgQZXJ4QLMqbGv m1v0xf5f9WaXaJWF6x2n31jC6MWwiNfWdTb4cS/XE1I9jJyPU9AQFSggt4869LIlX9bE LOhnqOS30NvPAoc/t7MnHiAbBh/2wQgPUc7gBEaR/kKBobrCiquOWzUssW1SvZWT6VKY bbJyS4b9nmtN8bbST8RGu0/9x5bgdzFzShqCX89yigUk1Y7hZJurizsbu7St2o473dA5 8LEw== X-Gm-Message-State: AOAM533ruyu9jgHaoWMfVHx/vQfwU4H3g9wPl9iDInDYKBWZ5gVltwD6 oLdZuWgk+nrPbZ5YkQ1R1jefWiz/q6TY X-Google-Smtp-Source: ABdhPJyaBZsZ03PKHBu8tZ1QHlTBf/TrAMVN7LxSvfPVD5I8vxcHe6hXUeftn5uPbOL3KC+i6X7asymswkgg X-Received: from bgardon.sea.corp.google.com ([2620:15c:100:202:8026:6888:3d55:3842]) (user=bgardon job=sendgmr) by 2002:a25:7809:: with SMTP id t9mr7038269ybc.99.1617224965618; Wed, 31 Mar 2021 14:09:25 -0700 (PDT) Date: Wed, 31 Mar 2021 14:08:37 -0700 
In-Reply-To: <20210331210841.3996155-1-bgardon@google.com> Message-Id: <20210331210841.3996155-10-bgardon@google.com> Mime-Version: 1.0 References: <20210331210841.3996155-1-bgardon@google.com> X-Mailer: git-send-email 2.31.0.291.g576ba9dcdaf-goog Subject: [PATCH 09/13] KVM: x86/mmu: Allow zap gfn range to operate under the mmu read lock From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , Peter Feiner , Junaid Shahid , Jim Mattson , Yulei Zhang , Wanpeng Li , Vitaly Kuznetsov , Xiao Guangrong , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org To reduce lock contention and interference with page fault handlers, allow the TDP MMU function to zap a GFN range to operate under the MMU read lock. Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/mmu.c | 15 ++++-- arch/x86/kvm/mmu/tdp_mmu.c | 102 ++++++++++++++++++++++++++----------- arch/x86/kvm/mmu/tdp_mmu.h | 6 ++- 3 files changed, 87 insertions(+), 36 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 667d64daa82c..dcbfc784cf2f 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3155,7 +3155,7 @@ static void mmu_free_root_page(struct kvm *kvm, hpa_t *root_hpa, sp = to_shadow_page(*root_hpa & PT64_BASE_ADDR_MASK); if (is_tdp_mmu_page(sp)) - kvm_tdp_mmu_put_root(kvm, sp); + kvm_tdp_mmu_put_root(kvm, sp, false); else if (!--sp->root_count && sp->role.invalid) kvm_mmu_prepare_zap_page(kvm, sp, invalid_list); @@ -5514,13 +5514,17 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end) } } + write_unlock(&kvm->mmu_lock); + if (is_tdp_mmu_enabled(kvm)) { - flush = kvm_tdp_mmu_zap_gfn_range(kvm, gfn_start, gfn_end); + read_lock(&kvm->mmu_lock); + flush = kvm_tdp_mmu_zap_gfn_range(kvm, gfn_start, gfn_end, + true); if (flush) kvm_flush_remote_tlbs(kvm); - } - write_unlock(&kvm->mmu_lock); + read_unlock(&kvm->mmu_lock); + } } static bool slot_rmap_write_protect(struct kvm *kvm, @@ -5959,7 +5963,8 @@ static void kvm_recover_nx_lpages(struct kvm *kvm) WARN_ON_ONCE(!sp->lpage_disallowed); if (is_tdp_mmu_page(sp)) { kvm_tdp_mmu_zap_gfn_range(kvm, sp->gfn, - sp->gfn + KVM_PAGES_PER_HPAGE(sp->role.level)); + sp->gfn + KVM_PAGES_PER_HPAGE(sp->role.level), + false); } else { kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list); WARN_ON_ONCE(sp->lpage_disallowed); diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index d255125059c4..0e99e4675dd4 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -27,6 +27,15 @@ void kvm_mmu_init_tdp_mmu(struct kvm *kvm) INIT_LIST_HEAD(&kvm->arch.tdp_mmu_pages); } +static __always_inline void kvm_lockdep_assert_mmu_lock_held(struct kvm *kvm, + bool shared) +{ + if (shared) + lockdep_assert_held_read(&kvm->mmu_lock); + else + lockdep_assert_held_write(&kvm->mmu_lock); +} + void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm) { if (!kvm->arch.tdp_mmu_enabled) @@ -42,7 +51,7 @@ void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm) } static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, - gfn_t start, gfn_t end, bool can_yield); + gfn_t start, gfn_t end, bool can_yield, bool shared); static void tdp_mmu_free_sp(struct kvm_mmu_page *sp) { @@ -66,11 +75,12 @@ static void tdp_mmu_free_sp_rcu_callback(struct rcu_head *head) tdp_mmu_free_sp(sp); } -void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root) +void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root, + bool shared) { gfn_t max_gfn = 1ULL << 
(shadow_phys_bits - PAGE_SHIFT); - lockdep_assert_held_write(&kvm->mmu_lock); + kvm_lockdep_assert_mmu_lock_held(kvm, shared); if (!refcount_dec_and_test(&root->tdp_mmu_root_count)) return; @@ -81,7 +91,7 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root) list_del_rcu(&root->link); spin_unlock(&kvm->arch.tdp_mmu_pages_lock); - zap_gfn_range(kvm, root, 0, max_gfn, false); + zap_gfn_range(kvm, root, 0, max_gfn, false, shared); call_rcu(&root->rcu_head, tdp_mmu_free_sp_rcu_callback); } @@ -94,11 +104,11 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root) * function will return NULL. */ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm, - struct kvm_mmu_page *prev_root) + struct kvm_mmu_page *prev_root, + bool shared) { struct kvm_mmu_page *next_root; - lockdep_assert_held_write(&kvm->mmu_lock); rcu_read_lock(); @@ -117,7 +127,7 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm, rcu_read_unlock(); if (prev_root) - kvm_tdp_mmu_put_root(kvm, prev_root); + kvm_tdp_mmu_put_root(kvm, prev_root, shared); return next_root; } @@ -127,11 +137,15 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm, * This makes it safe to release the MMU lock and yield within the loop, but * if exiting the loop early, the caller must drop the reference to the most * recent root. (Unless keeping a live reference is desirable.) + * + * If shared is set, this function is operating under the MMU lock in read + * mode. In the unlikely event that this thread must free a root, the lock + * will be temporarily dropped and reacquired in write mode. */ -#define for_each_tdp_mmu_root_yield_safe(_kvm, _root) \ - for (_root = tdp_mmu_next_root(_kvm, NULL); \ - _root; \ - _root = tdp_mmu_next_root(_kvm, _root)) +#define for_each_tdp_mmu_root_yield_safe(_kvm, _root, _shared) \ + for (_root = tdp_mmu_next_root(_kvm, NULL, _shared); \ + _root; \ + _root = tdp_mmu_next_root(_kvm, _root, _shared)) /* Only safe under the MMU lock in write mode, without yielding. */ #define for_each_tdp_mmu_root(_kvm, _root) \ @@ -632,7 +646,8 @@ static inline void tdp_mmu_set_spte_no_dirty_log(struct kvm *kvm, * Return false if a yield was not needed. */ static inline bool tdp_mmu_iter_cond_resched(struct kvm *kvm, - struct tdp_iter *iter, bool flush) + struct tdp_iter *iter, bool flush, + bool shared) { /* Ensure forward progress has been made before yielding. */ if (iter->next_last_level_gfn == iter->yielded_gfn) @@ -644,7 +659,11 @@ static inline bool tdp_mmu_iter_cond_resched(struct kvm *kvm, if (flush) kvm_flush_remote_tlbs(kvm); - cond_resched_rwlock_write(&kvm->mmu_lock); + if (shared) + cond_resched_rwlock_read(&kvm->mmu_lock); + else + cond_resched_rwlock_write(&kvm->mmu_lock); + rcu_read_lock(); WARN_ON(iter->gfn > iter->next_last_level_gfn); @@ -662,23 +681,33 @@ static inline bool tdp_mmu_iter_cond_resched(struct kvm *kvm, * non-root pages mapping GFNs strictly within that range. Returns true if * SPTEs have been cleared and a TLB flush is needed before releasing the * MMU lock. + * * If can_yield is true, will release the MMU lock and reschedule if the * scheduler needs the CPU or there is contention on the MMU lock. If this * function cannot yield, it will not release the MMU lock or reschedule and * the caller must ensure it does not supply too large a GFN range, or the * operation can cause a soft lockup. 
+ * + * If shared is true, this thread holds the MMU lock in read mode and must + * account for the possibility that other threads are modifying the paging + * structures concurrently. If shared is false, this thread should hold the + * MMU lock in write mode. */ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, - gfn_t start, gfn_t end, bool can_yield) + gfn_t start, gfn_t end, bool can_yield, bool shared) { struct tdp_iter iter; bool flush_needed = false; + kvm_lockdep_assert_mmu_lock_held(kvm, shared); + rcu_read_lock(); tdp_root_for_each_pte(iter, root, start, end) { +retry: if (can_yield && - tdp_mmu_iter_cond_resched(kvm, &iter, flush_needed)) { + tdp_mmu_iter_cond_resched(kvm, &iter, flush_needed, + shared)) { flush_needed = false; continue; } @@ -696,8 +725,17 @@ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, !is_last_spte(iter.old_spte, iter.level)) continue; - tdp_mmu_set_spte(kvm, &iter, 0); - flush_needed = true; + if (!shared) { + tdp_mmu_set_spte(kvm, &iter, 0); + flush_needed = true; + } else if (!tdp_mmu_zap_spte_atomic(kvm, &iter)) { + /* + * The iter must explicitly re-read the SPTE because + * the atomic cmpxchg failed. + */ + iter.old_spte = READ_ONCE(*rcu_dereference(iter.sptep)); + goto retry; + } } rcu_read_unlock(); @@ -709,14 +747,20 @@ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, * non-root pages mapping GFNs strictly within that range. Returns true if * SPTEs have been cleared and a TLB flush is needed before releasing the * MMU lock. + * + * If shared is true, this thread holds the MMU lock in read mode and must + * account for the possibility that other threads are modifying the paging + * structures concurrently. If shared is false, this thread should hold the + * MMU in write mode. 
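As context for the shared/exclusive split described above: a helper that may drop and re-take the MMU lock has to know which mode the caller holds, or it would re-acquire the wrong lock type when it resumes. A rough userspace analogue using a pthread rwlock (the names and the 1024-entry yield interval are invented for illustration):

#include <pthread.h>
#include <sched.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_rwlock_t mmu_lock = PTHREAD_RWLOCK_INITIALIZER;

/*
 * Briefly drop the lock so other threads can make progress, then re-take
 * it in the same mode the caller holds it - the reason "shared" must be
 * threaded down through every helper that can yield.
 */
static void yield_mmu_lock(bool shared)
{
	pthread_rwlock_unlock(&mmu_lock);
	sched_yield();
	if (shared)
		pthread_rwlock_rdlock(&mmu_lock);
	else
		pthread_rwlock_wrlock(&mmu_lock);
}

static void zap_range(int start, int end, bool shared)
{
	for (int i = start; i < end; i++) {
		if (i && i % 1024 == 0)
			yield_mmu_lock(shared);
		/* ... zap entry i ... */
	}
}

int main(void)
{
	/* Exclusive caller. */
	pthread_rwlock_wrlock(&mmu_lock);
	zap_range(0, 4096, false);
	pthread_rwlock_unlock(&mmu_lock);

	/* Shared caller: other readers may run concurrently. */
	pthread_rwlock_rdlock(&mmu_lock);
	zap_range(0, 4096, true);
	pthread_rwlock_unlock(&mmu_lock);

	puts("done");
	return 0;
}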
*/ -bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end) +bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end, + bool shared) { struct kvm_mmu_page *root; bool flush = false; - for_each_tdp_mmu_root_yield_safe(kvm, root) - flush |= zap_gfn_range(kvm, root, start, end, true); + for_each_tdp_mmu_root_yield_safe(kvm, root, shared) + flush |= zap_gfn_range(kvm, root, start, end, true, shared); return flush; } @@ -726,7 +770,7 @@ void kvm_tdp_mmu_zap_all(struct kvm *kvm) gfn_t max_gfn = 1ULL << (shadow_phys_bits - PAGE_SHIFT); bool flush; - flush = kvm_tdp_mmu_zap_gfn_range(kvm, 0, max_gfn); + flush = kvm_tdp_mmu_zap_gfn_range(kvm, 0, max_gfn, false); if (flush) kvm_flush_remote_tlbs(kvm); } @@ -893,7 +937,7 @@ static __always_inline int kvm_tdp_mmu_handle_hva_range(struct kvm *kvm, int ret = 0; int as_id; - for_each_tdp_mmu_root_yield_safe(kvm, root) { + for_each_tdp_mmu_root_yield_safe(kvm, root, false) { as_id = kvm_mmu_page_as_id(root); slots = __kvm_memslots(kvm, as_id); kvm_for_each_memslot(memslot, slots) { @@ -933,7 +977,7 @@ static int zap_gfn_range_hva_wrapper(struct kvm *kvm, struct kvm_mmu_page *root, gfn_t start, gfn_t end, unsigned long unused) { - return zap_gfn_range(kvm, root, start, end, false); + return zap_gfn_range(kvm, root, start, end, false, false); } int kvm_tdp_mmu_zap_hva_range(struct kvm *kvm, unsigned long start, @@ -1098,7 +1142,7 @@ static bool wrprot_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, for_each_tdp_pte_min_level(iter, root->spt, root->role.level, min_level, start, end) { - if (tdp_mmu_iter_cond_resched(kvm, &iter, false)) + if (tdp_mmu_iter_cond_resched(kvm, &iter, false, false)) continue; if (!is_shadow_present_pte(iter.old_spte) || @@ -1128,7 +1172,7 @@ bool kvm_tdp_mmu_wrprot_slot(struct kvm *kvm, struct kvm_memory_slot *slot, int root_as_id; bool spte_set = false; - for_each_tdp_mmu_root_yield_safe(kvm, root) { + for_each_tdp_mmu_root_yield_safe(kvm, root, false) { root_as_id = kvm_mmu_page_as_id(root); if (root_as_id != slot->as_id) continue; @@ -1157,7 +1201,7 @@ static bool clear_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, rcu_read_lock(); tdp_root_for_each_leaf_pte(iter, root, start, end) { - if (tdp_mmu_iter_cond_resched(kvm, &iter, false)) + if (tdp_mmu_iter_cond_resched(kvm, &iter, false, false)) continue; if (spte_ad_need_write_protect(iter.old_spte)) { @@ -1193,7 +1237,7 @@ bool kvm_tdp_mmu_clear_dirty_slot(struct kvm *kvm, struct kvm_memory_slot *slot) int root_as_id; bool spte_set = false; - for_each_tdp_mmu_root_yield_safe(kvm, root) { + for_each_tdp_mmu_root_yield_safe(kvm, root, false) { root_as_id = kvm_mmu_page_as_id(root); if (root_as_id != slot->as_id) continue; @@ -1291,7 +1335,7 @@ static void zap_collapsible_spte_range(struct kvm *kvm, rcu_read_lock(); tdp_root_for_each_pte(iter, root, start, end) { - if (tdp_mmu_iter_cond_resched(kvm, &iter, spte_set)) { + if (tdp_mmu_iter_cond_resched(kvm, &iter, spte_set, false)) { spte_set = false; continue; } @@ -1326,7 +1370,7 @@ void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm, struct kvm_mmu_page *root; int root_as_id; - for_each_tdp_mmu_root_yield_safe(kvm, root) { + for_each_tdp_mmu_root_yield_safe(kvm, root, false) { root_as_id = kvm_mmu_page_as_id(root); if (root_as_id != slot->as_id) continue; diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h index 9961df505067..855e58856815 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.h +++ b/arch/x86/kvm/mmu/tdp_mmu.h @@ -13,9 +13,11 @@ __must_check static inline bool 
kvm_tdp_mmu_get_root(struct kvm *kvm, return refcount_inc_not_zero(&root->tdp_mmu_root_count); } -void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root); +void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root, + bool shared); -bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end); +bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end, + bool shared); void kvm_tdp_mmu_zap_all(struct kvm *kvm); int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code, From patchwork Wed Mar 31 21:08:38 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12176345 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9145CC43470 for ; Wed, 31 Mar 2021 21:10:26 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6D4BB61056 for ; Wed, 31 Mar 2021 21:10:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232931AbhCaVKB (ORCPT ); Wed, 31 Mar 2021 17:10:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47710 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232817AbhCaVJa (ORCPT ); Wed, 31 Mar 2021 17:09:30 -0400 Received: from mail-pl1-x649.google.com (mail-pl1-x649.google.com [IPv6:2607:f8b0:4864:20::649]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C29FFC061761 for ; Wed, 31 Mar 2021 14:09:29 -0700 (PDT) Received: by mail-pl1-x649.google.com with SMTP id y10so1735448plt.21 for ; Wed, 31 Mar 2021 14:09:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=rflZjTREsJ/gYfIbS3+5us9ybJwv9yTRsjWAKMCJ5Y4=; b=oob7KpmemiWewo+Ci8QQhYn901DVVDpmFfmk2PhtTZNXo8tdI6jt3YAIhheWN4y/a/ NjUWpruPwj6lc0IEr9hrDP6eFSZepprjHVb3e8sP+t+zMOHnE/I9vKOkyzzwzZjIjZy3 mVeKkPALWk8XtaO8huIX/XvvOvJLwiK5qmdvwPSpFkoznXPL7PHj3QIfEuzxDP4fKsvG 7OFquoGYhQPjY8zs4Swlmy/sZTNnydpGw1cTshXhfmLyp6pCtMGLVd48F+D+B/fbYEjX 0KxJjSeOzeEX9g/68/zwsIe6OpJbbmSOtQyRgKbQ92SwDaZEkUUc82P0Bn3vsyfsvDR4 slJA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=rflZjTREsJ/gYfIbS3+5us9ybJwv9yTRsjWAKMCJ5Y4=; b=fB+yyNaJ7/nEX6B73joOzddaJoK0+X/3LuhsV0vRV5eblVqeUCY/J3qEEBh8xf0rNf hNADjw04DfyRjim/pxPz1d3xq8e9AJOQvYwMKd7nuQLF9dLa/ngJpC6EyXWeAqdlcBDi TkfBtTMKBBAayQz6J6NazdAvFGs5WenkLX2OaVu//H7S+TbPPq598cG83LGR1v/QXNJi aYd4VXPuTbEwn6IuFDJluNeWMFA5QjVbFJ73P7aIuIH8HqFw5pmZrURgTsTCGvcRpLr2 7FtwDBO+zC9wO39QHNn0m5CJLYO3j3S01jBcJPf89FhGy0byYLTgswdxp01r5+LlR4Jd 0FLw== X-Gm-Message-State: AOAM531nulVzIkYMPH4/937S8d696Ioj3PUWbvnDMQNdmvR+bgEERc09 aiuXywuO8TFD0vVE+JBljYsCk83pH3vn X-Google-Smtp-Source: ABdhPJwnq8zO7FqwGFOoGocE4klDVpKwN7Hey65gI/W+fuTsGzvz/nAsWGBZ+Es+tBL9ISU0/BdhrvwYi8jq X-Received: from bgardon.sea.corp.google.com 
([2620:15c:100:202:8026:6888:3d55:3842]) (user=bgardon job=sendgmr) by 2002:a17:90a:8b97:: with SMTP id z23mr39673pjn.1.1617224968753; Wed, 31 Mar 2021 14:09:28 -0700 (PDT) Date: Wed, 31 Mar 2021 14:08:38 -0700 In-Reply-To: <20210331210841.3996155-1-bgardon@google.com> Message-Id: <20210331210841.3996155-11-bgardon@google.com> Mime-Version: 1.0 References: <20210331210841.3996155-1-bgardon@google.com> X-Mailer: git-send-email 2.31.0.291.g576ba9dcdaf-goog Subject: [PATCH 10/13] KVM: x86/mmu: Allow zapping collapsible SPTEs to use MMU read lock From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , Peter Feiner , Junaid Shahid , Jim Mattson , Yulei Zhang , Wanpeng Li , Vitaly Kuznetsov , Xiao Guangrong , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org To speed the process of disabling dirty logging, change the TDP MMU function which zaps collapsible SPTEs to run under the MMU read lock. Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/mmu.c | 9 ++++++--- arch/x86/kvm/mmu/tdp_mmu.c | 17 +++++++++++++---- 2 files changed, 19 insertions(+), 7 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index dcbfc784cf2f..81967b4e7d76 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -5610,10 +5610,13 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm, write_lock(&kvm->mmu_lock); slot_handle_leaf(kvm, slot, kvm_mmu_zap_collapsible_spte, true); - - if (is_tdp_mmu_enabled(kvm)) - kvm_tdp_mmu_zap_collapsible_sptes(kvm, slot); write_unlock(&kvm->mmu_lock); + + if (is_tdp_mmu_enabled(kvm)) { + read_lock(&kvm->mmu_lock); + kvm_tdp_mmu_zap_collapsible_sptes(kvm, memslot); + read_unlock(&kvm->mmu_lock); + } } void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm, diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 0e99e4675dd4..862acb868abd 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1335,7 +1335,8 @@ static void zap_collapsible_spte_range(struct kvm *kvm, rcu_read_lock(); tdp_root_for_each_pte(iter, root, start, end) { - if (tdp_mmu_iter_cond_resched(kvm, &iter, spte_set, false)) { +retry: + if (tdp_mmu_iter_cond_resched(kvm, &iter, spte_set, true)) { spte_set = false; continue; } @@ -1350,8 +1351,14 @@ static void zap_collapsible_spte_range(struct kvm *kvm, pfn, PG_LEVEL_NUM)) continue; - tdp_mmu_set_spte(kvm, &iter, 0); - + if (!tdp_mmu_zap_spte_atomic(kvm, &iter)) { + /* + * The iter must explicitly re-read the SPTE because + * the atomic cmpxchg failed. 
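The retry label above is the usual lock-free update loop: if the compare-and-exchange loses a race, re-read the current value and try again. A minimal C11 sketch of the same pattern on a stand-in SPTE word (not the KVM helpers; the names are illustrative):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static _Atomic uint64_t spte = 0x1234;   /* stand-in for a live SPTE */

/*
 * Replace the SPTE only if it still holds the value the caller last read,
 * mirroring the contract of the atomic set-SPTE helpers.
 */
static bool set_spte_atomic(uint64_t *old_spte, uint64_t new_spte)
{
	return atomic_compare_exchange_strong(&spte, old_spte, new_spte);
}

int main(void)
{
	uint64_t old_spte = atomic_load(&spte);

	/*
	 * On failure, atomic_compare_exchange_strong() writes the current
	 * value back into old_spte, which corresponds to the explicit
	 * "re-read the SPTE" step before retrying.
	 */
	while (!set_spte_atomic(&old_spte, 0 /* zap */))
		;

	printf("spte is now %#llx\n", (unsigned long long)atomic_load(&spte));
	return 0;
}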
+ */ + iter.old_spte = READ_ONCE(*rcu_dereference(iter.sptep)); + goto retry; + } spte_set = true; } @@ -1370,7 +1377,9 @@ void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm, struct kvm_mmu_page *root; int root_as_id; - for_each_tdp_mmu_root_yield_safe(kvm, root, false) { + lockdep_assert_held_read(&kvm->mmu_lock); + + for_each_tdp_mmu_root_yield_safe(kvm, root, true) { root_as_id = kvm_mmu_page_as_id(root); if (root_as_id != slot->as_id) continue; From patchwork Wed Mar 31 21:08:39 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12176349 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 30282C43600 for ; Wed, 31 Mar 2021 21:10:27 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E5FCD61056 for ; Wed, 31 Mar 2021 21:10:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233119AbhCaVKE (ORCPT ); Wed, 31 Mar 2021 17:10:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47728 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232883AbhCaVJd (ORCPT ); Wed, 31 Mar 2021 17:09:33 -0400 Received: from mail-pg1-x54a.google.com (mail-pg1-x54a.google.com [IPv6:2607:f8b0:4864:20::54a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 423A7C06174A for ; Wed, 31 Mar 2021 14:09:32 -0700 (PDT) Received: by mail-pg1-x54a.google.com with SMTP id f20so2088701pgj.6 for ; Wed, 31 Mar 2021 14:09:32 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=g4yhsGdq/iXeJP11WQ8LbSIQn68/bjXt6uk7V5jIWEA=; b=UST8JH9MDT8t1R7djBJIoYHjYxYXADs6eFafwKsZnUgDg3bQNZSUccc0+ydBqiz1Ze AhEdXbRNSzIKsUtT21vNZ0j7oGU+1Ge/KzXmlRujzOwmuXPDBQOoHSbVOLik1uFbNBhn 0ZMbvLgObsPAmAnmIJI8Odnu2mVs00kqDgK3AuEkp821thrx4rvKR1HhbN8I8QslhIJE at85bkpgHiPCL1FmQ3EjRpCsMAlDaQltkk6cZXBwjPYIuUGEY3U1imWKSJITfZlh07oR t1ZDTt6rmj87Qg6wQUTL0J4xplIFhjhHnzDb9poKvBgc+4Lqb4lQcfacw55CojY+uCYc oLzg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=g4yhsGdq/iXeJP11WQ8LbSIQn68/bjXt6uk7V5jIWEA=; b=V1e1gxBdnD8omXNh1AO5Xe2zVn+d4leQ1tx79z3LDT0vT52WAEhQqnNU42PI6ft7W/ 1i/eHDOP4WKh3lIPpwtDcap6mWT1n8JUXvWvAWRLcgbwT1qyPT+p1mAcisteXdkLn+c5 x8LOxYPDmi4o15EtLYuSYqSyrYDSpKbCLEyvxiEIm0IXvbaOQfVpwZ+BcXeX3QmYZIhO 6dhGmO8RSqwOSZtJPxg4WsF7OGOSuId0BAj3cr4maxXImdsVVwsTHPm3th22nZt9kykm eipJDeEaxdbVQbnAnyZBTPmyyTB5rv0mPC2pPozU2IfyFSQS+i8ZC808yGPzeImQl2cQ nuRg== X-Gm-Message-State: AOAM530L23oTeta1rwcBtgrYjm+FrQbUiWtIx3KYcrmr3XuHc0QZTlS0 AWYrcAL0//KlosfkmIHhmAPjJe9lKySr X-Google-Smtp-Source: ABdhPJwsLiU1eIc2FCVKYgkLLh79Ri95VB2mDC1fL88Gb2BrJ/QT2sNyV/2IbkfKnBV+NeJiFqLJ0xAlIA/w X-Received: from bgardon.sea.corp.google.com ([2620:15c:100:202:8026:6888:3d55:3842]) (user=bgardon job=sendgmr) by 
2002:a17:902:e882:b029:e6:caba:fff6 with SMTP id w2-20020a170902e882b02900e6cabafff6mr4843196plg.73.1617224972205; Wed, 31 Mar 2021 14:09:32 -0700 (PDT) Date: Wed, 31 Mar 2021 14:08:39 -0700 In-Reply-To: <20210331210841.3996155-1-bgardon@google.com> Message-Id: <20210331210841.3996155-12-bgardon@google.com> Mime-Version: 1.0 References: <20210331210841.3996155-1-bgardon@google.com> X-Mailer: git-send-email 2.31.0.291.g576ba9dcdaf-goog Subject: [PATCH 11/13] KVM: x86/mmu: Allow enabling / disabling dirty logging under MMU read lock From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , Peter Feiner , Junaid Shahid , Jim Mattson , Yulei Zhang , Wanpeng Li , Vitaly Kuznetsov , Xiao Guangrong , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org To reduce lock contention and interference with page fault handlers, allow the TDP MMU functions which enable and disable dirty logging to operate under the MMU read lock. Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/mmu.c | 16 +++++++--- arch/x86/kvm/mmu/tdp_mmu.c | 62 ++++++++++++++++++++++++++++++-------- 2 files changed, 61 insertions(+), 17 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 81967b4e7d76..bf535c9f7ff2 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -5543,10 +5543,14 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm, write_lock(&kvm->mmu_lock); flush = slot_handle_level(kvm, memslot, slot_rmap_write_protect, start_level, KVM_MAX_HUGEPAGE_LEVEL, false); - if (is_tdp_mmu_enabled(kvm)) - flush |= kvm_tdp_mmu_wrprot_slot(kvm, memslot, PG_LEVEL_4K); write_unlock(&kvm->mmu_lock); + if (is_tdp_mmu_enabled(kvm)) { + read_lock(&kvm->mmu_lock); + flush |= kvm_tdp_mmu_wrprot_slot(kvm, memslot, PG_LEVEL_4K); + read_unlock(&kvm->mmu_lock); + } + /* * We can flush all the TLBs out of the mmu lock without TLB * corruption since we just change the spte from writable to @@ -5641,10 +5645,14 @@ void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm, write_lock(&kvm->mmu_lock); flush = slot_handle_leaf(kvm, memslot, __rmap_clear_dirty, false); - if (is_tdp_mmu_enabled(kvm)) - flush |= kvm_tdp_mmu_clear_dirty_slot(kvm, memslot); write_unlock(&kvm->mmu_lock); + if (is_tdp_mmu_enabled(kvm)) { + read_lock(&kvm->mmu_lock); + flush |= kvm_tdp_mmu_clear_dirty_slot(kvm, memslot); + read_unlock(&kvm->mmu_lock); + } + /* * It's also safe to flush TLBs out of mmu lock here as currently this * function is only used for dirty logging, in which case flushing TLB diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 862acb868abd..0c90dc034819 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -491,8 +491,9 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn, } /* - * tdp_mmu_set_spte_atomic - Set a TDP MMU SPTE atomically and handle the - * associated bookkeeping + * tdp_mmu_set_spte_atomic_no_dirty_log - Set a TDP MMU SPTE atomically + * and handle the associated bookkeeping, but do not mark the page dirty + * in KVM's dirty bitmaps. * * @kvm: kvm instance * @iter: a tdp_iter instance currently on the SPTE that should be set @@ -500,9 +501,9 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn, * Returns: true if the SPTE was set, false if it was not. If false is returned, * this function will have no side-effects. 
*/ -static inline bool tdp_mmu_set_spte_atomic(struct kvm *kvm, - struct tdp_iter *iter, - u64 new_spte) +static inline bool tdp_mmu_set_spte_atomic_no_dirty_log(struct kvm *kvm, + struct tdp_iter *iter, + u64 new_spte) { lockdep_assert_held_read(&kvm->mmu_lock); @@ -517,9 +518,22 @@ static inline bool tdp_mmu_set_spte_atomic(struct kvm *kvm, new_spte) != iter->old_spte) return false; - handle_changed_spte(kvm, iter->as_id, iter->gfn, iter->old_spte, - new_spte, iter->level, true); + __handle_changed_spte(kvm, iter->as_id, iter->gfn, iter->old_spte, + new_spte, iter->level, true); + handle_changed_spte_acc_track(iter->old_spte, new_spte, iter->level); + + return true; +} + +static inline bool tdp_mmu_set_spte_atomic(struct kvm *kvm, + struct tdp_iter *iter, + u64 new_spte) +{ + if (!tdp_mmu_set_spte_atomic_no_dirty_log(kvm, iter, new_spte)) + return false; + handle_changed_spte_dirty_log(kvm, iter->as_id, iter->gfn, + iter->old_spte, new_spte, iter->level); return true; } @@ -1142,7 +1156,8 @@ static bool wrprot_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, for_each_tdp_pte_min_level(iter, root->spt, root->role.level, min_level, start, end) { - if (tdp_mmu_iter_cond_resched(kvm, &iter, false, false)) +retry: + if (tdp_mmu_iter_cond_resched(kvm, &iter, false, true)) continue; if (!is_shadow_present_pte(iter.old_spte) || @@ -1152,7 +1167,15 @@ static bool wrprot_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, new_spte = iter.old_spte & ~PT_WRITABLE_MASK; - tdp_mmu_set_spte_no_dirty_log(kvm, &iter, new_spte); + if (!tdp_mmu_set_spte_atomic_no_dirty_log(kvm, &iter, + new_spte)) { + /* + * The iter must explicitly re-read the SPTE because + * the atomic cmpxchg failed. + */ + iter.old_spte = READ_ONCE(*rcu_dereference(iter.sptep)); + goto retry; + } spte_set = true; } @@ -1172,7 +1195,9 @@ bool kvm_tdp_mmu_wrprot_slot(struct kvm *kvm, struct kvm_memory_slot *slot, int root_as_id; bool spte_set = false; - for_each_tdp_mmu_root_yield_safe(kvm, root, false) { + lockdep_assert_held_read(&kvm->mmu_lock); + + for_each_tdp_mmu_root_yield_safe(kvm, root, true) { root_as_id = kvm_mmu_page_as_id(root); if (root_as_id != slot->as_id) continue; @@ -1201,7 +1226,8 @@ static bool clear_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, rcu_read_lock(); tdp_root_for_each_leaf_pte(iter, root, start, end) { - if (tdp_mmu_iter_cond_resched(kvm, &iter, false, false)) +retry: + if (tdp_mmu_iter_cond_resched(kvm, &iter, false, true)) continue; if (spte_ad_need_write_protect(iter.old_spte)) { @@ -1216,7 +1242,15 @@ static bool clear_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, continue; } - tdp_mmu_set_spte_no_dirty_log(kvm, &iter, new_spte); + if (!tdp_mmu_set_spte_atomic_no_dirty_log(kvm, &iter, + new_spte)) { + /* + * The iter must explicitly re-read the SPTE because + * the atomic cmpxchg failed. 
+ */ + iter.old_spte = READ_ONCE(*rcu_dereference(iter.sptep)); + goto retry; + } spte_set = true; } @@ -1237,7 +1271,9 @@ bool kvm_tdp_mmu_clear_dirty_slot(struct kvm *kvm, struct kvm_memory_slot *slot) int root_as_id; bool spte_set = false; - for_each_tdp_mmu_root_yield_safe(kvm, root, false) { + lockdep_assert_held_read(&kvm->mmu_lock); + + for_each_tdp_mmu_root_yield_safe(kvm, root, true) { root_as_id = kvm_mmu_page_as_id(root); if (root_as_id != slot->as_id) continue; From patchwork Wed Mar 31 21:08:40 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12176351 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4D11EC43603 for ; Wed, 31 Mar 2021 21:10:27 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1800C61073 for ; Wed, 31 Mar 2021 21:10:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233143AbhCaVKG (ORCPT ); Wed, 31 Mar 2021 17:10:06 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47744 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232967AbhCaVJg (ORCPT ); Wed, 31 Mar 2021 17:09:36 -0400 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 121EFC061760 for ; Wed, 31 Mar 2021 14:09:36 -0700 (PDT) Received: by mail-yb1-xb4a.google.com with SMTP id f75so3600016yba.8 for ; Wed, 31 Mar 2021 14:09:36 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=eskadvO96k3pneANThCp7BKmWYHI2ZyppDiy24nRtnU=; b=ZNhWId0uRHEhcDaAEd0M+K8xEu8VkF9gCgl+jE8TAn0noCeIzMzHMNMryIZIYQLkSH 4ZKR95723NNgMyMUMIFcWwEX4p0WBlE0hX5Cae9bzRjnMHRvQ4DMS5JMJhz7rAvUQnks BmQuV2ch1Yzm/Ouj4V7cHkZ6q0w9i4fwFRRSLSRl2S6Sr0lW6Aj/QQ8eJcOdRlQOwt8U kyBEIuggTOia8+5h3uBAMy+n9qtAqs2eIjIbkUCnYLm2UFErwm1so1B8Lz0ARw9ovDCs 5hentAQ2bFLK/1pb1h6jkF8C8fJsiv+7LtlzbFy4uLwQxeTwbm+8U6Z+KWkInFAwlOF2 LWKQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=eskadvO96k3pneANThCp7BKmWYHI2ZyppDiy24nRtnU=; b=QxtaYNl7Zc6+JSp31j1WvESByF2JdrTtJFVbkhWD9Nn8hLvwm/ObVRRc/mhUFc96GI w1+kQzKa/1xd/woTbIkAJmq2fqF2V+d68P4IZskP/Sl6WVSo82YIX8frqmjzlyihAz40 PRo0vmnKVrf6PPIcq4RLxHDcA0Y4TiEuvTa6ICKrR5Ir6VLlew0wIyjjz4fMRPBTLj83 0EXzKuJ6e6eF8mE3diy21eCHs2qJkozFFSARksigebFjwWX7NN6D5INVBgD0OGK7Igfw mdHepvFIV5VoGSOxL6UOSrbxDJS6GCKCIJhJ8gk71EWnycbFFQfwDdl19xqNuMi2d8l3 Pg1w== X-Gm-Message-State: AOAM532Tsj6w6VaWFHPaO+UCdO+EqkPTIeB/BGOZf5zSrDpX+2zy+8jW Rr0NHJRaX8wyWVh1O6qHSW9bZ+jy1275 X-Google-Smtp-Source: ABdhPJyfQtYhFW2ppM84KeLTBpgHFcYS7dJjRLhpqN+OnPuOl2ss6HgsxRRuWZICOhJTiHuUMc86SR3/XdTq X-Received: from bgardon.sea.corp.google.com ([2620:15c:100:202:8026:6888:3d55:3842]) (user=bgardon 
job=sendgmr) by 2002:a25:d2d3:: with SMTP id j202mr7290798ybg.157.1617224975277; Wed, 31 Mar 2021 14:09:35 -0700 (PDT) Date: Wed, 31 Mar 2021 14:08:40 -0700 In-Reply-To: <20210331210841.3996155-1-bgardon@google.com> Message-Id: <20210331210841.3996155-13-bgardon@google.com> Mime-Version: 1.0 References: <20210331210841.3996155-1-bgardon@google.com> X-Mailer: git-send-email 2.31.0.291.g576ba9dcdaf-goog Subject: [PATCH 12/13] KVM: x86/mmu: Fast invalidation for TDP MMU From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , Peter Feiner , Junaid Shahid , Jim Mattson , Yulei Zhang , Wanpeng Li , Vitaly Kuznetsov , Xiao Guangrong , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Provide a real mechanism for fast invalidation by marking roots as invalid so that their reference count will quickly fall to zero and they will be torn down. One negative side effect of this approach is that a vCPU thread will likely drop the last reference to a root and be saddled with the work of tearing down an entire paging structure. This issue will be resolved in a later commit. Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/mmu.c | 6 +++--- arch/x86/kvm/mmu/tdp_mmu.c | 14 ++++++++++++++ arch/x86/kvm/mmu/tdp_mmu.h | 5 +++++ 3 files changed, 22 insertions(+), 3 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index bf535c9f7ff2..49b7097fb55b 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -5430,6 +5430,9 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm) write_lock(&kvm->mmu_lock); trace_kvm_mmu_zap_all_fast(kvm); + if (is_tdp_mmu_enabled(kvm)) + kvm_tdp_mmu_invalidate_roots(kvm); + /* * Toggle mmu_valid_gen between '0' and '1'. Because slots_lock is * held for the entire duration of zapping obsolete pages, it's @@ -5451,9 +5454,6 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm) kvm_zap_obsolete_pages(kvm); - if (is_tdp_mmu_enabled(kvm)) - kvm_tdp_mmu_zap_all(kvm); - write_unlock(&kvm->mmu_lock); } diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 0c90dc034819..428ff6778426 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -789,6 +789,20 @@ void kvm_tdp_mmu_zap_all(struct kvm *kvm) kvm_flush_remote_tlbs(kvm); } +/* + * This function depends on running in the same MMU lock critical section as + * kvm_reload_remote_mmus. Since this is in the same critical section, no new + * roots will be created between this function and the MMU reload signals + * being sent. + */ +void kvm_tdp_mmu_invalidate_roots(struct kvm *kvm) +{ + struct kvm_mmu_page *root; + + for_each_tdp_mmu_root(kvm, root) + root->role.invalid = true; +} + /* * Installs a last-level SPTE to handle a TDP page fault.
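The scheme works because, per the tdp_mmu.h hunk in this patch, kvm_tdp_mmu_get_root() refuses to hand out references to an invalid root, so a root's count can only fall once it is marked invalid. A small C11 model of that tryget/put lifetime (invented names; free() stands in for the real teardown):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct root {                     /* stand-in for struct kvm_mmu_page */
	atomic_int refcount;
	bool invalid;
};

/*
 * Analogue of kvm_tdp_mmu_get_root(): no new references once the root is
 * invalid, and none at all once the count has already hit zero.
 */
static bool get_root(struct root *r)
{
	int old = atomic_load(&r->refcount);

	if (r->invalid)
		return false;

	while (old != 0) {
		if (atomic_compare_exchange_weak(&r->refcount, &old, old + 1))
			return true;
	}
	return false;
}

/* Analogue of kvm_tdp_mmu_put_root(): the last reference frees the root. */
static void put_root(struct root *r)
{
	if (atomic_fetch_sub(&r->refcount, 1) == 1) {
		printf("tearing down root\n");
		free(r);
	}
}

int main(void)
{
	struct root *r = calloc(1, sizeof(*r));

	atomic_init(&r->refcount, 1);       /* the vCPU's reference */

	r->invalid = true;                  /* fast invalidation */
	printf("get after invalidate: %d\n", get_root(r));  /* 0: refused */

	put_root(r);                        /* vCPU drops its reference */
	return 0;
}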
* (NPT/EPT violation/misconfiguration) diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h index 855e58856815..ff4978817fb8 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.h +++ b/arch/x86/kvm/mmu/tdp_mmu.h @@ -10,6 +10,9 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu); __must_check static inline bool kvm_tdp_mmu_get_root(struct kvm *kvm, struct kvm_mmu_page *root) { + if (root->role.invalid) + return false; + return refcount_inc_not_zero(&root->tdp_mmu_root_count); } @@ -20,6 +23,8 @@ bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end, bool shared); void kvm_tdp_mmu_zap_all(struct kvm *kvm); +void kvm_tdp_mmu_invalidate_roots(struct kvm *kvm); + int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code, int map_writable, int max_level, kvm_pfn_t pfn, bool prefault); From patchwork Wed Mar 31 21:08:41 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 12176353 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6E102C4360C for ; Wed, 31 Mar 2021 21:10:27 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 33C9E61075 for ; Wed, 31 Mar 2021 21:10:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233168AbhCaVKH (ORCPT ); Wed, 31 Mar 2021 17:10:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47764 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233017AbhCaVJk (ORCPT ); Wed, 31 Mar 2021 17:09:40 -0400 Received: from mail-yb1-xb49.google.com (mail-yb1-xb49.google.com [IPv6:2607:f8b0:4864:20::b49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 51876C061761 for ; Wed, 31 Mar 2021 14:09:39 -0700 (PDT) Received: by mail-yb1-xb49.google.com with SMTP id 71so525431ybl.0 for ; Wed, 31 Mar 2021 14:09:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=b52JDbJCQ0WWnStpBK4r+Ue/B9wKBNJL13KbqXYmGYg=; b=VBh/zpM0UblBio97G9OLhp5ZW9nq+0UT6vMkJslIRHEWu0AczvOramp6CqciEKTbNr c715S9mhEe5cbxZU9ZKbk1c1CG5XtzwwOhk6j1LwoDVLVCyuMpu+TIsXYpJteyUEGVHZ r//H3my+d7NQTcNRv5sL8ewSOPRhIiz1Px5cmQ1rpZatblglnNTdxIuuC8j5oiWrsU7z kOVnAcDmNTrthQF2PGanzEe2RmfNCgzGlvG/1CO304wIIbZVHFpASCsdKZA4q1e3tATH 0TVS8ieoNq7rVJ1WEHQzrcU13869pfdhF6H29zvvOHkoH6Q/lLNec28NDz88L14aTLKD wxrg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=b52JDbJCQ0WWnStpBK4r+Ue/B9wKBNJL13KbqXYmGYg=; b=HdY0dh2WqxtWxJ0YPQpwfPQ8iQpizA/ZpWQR0OBY+WwINwhjk/RenKlSv9Hwu7GFbi TURyymZyWD1CjA5sICDL0cAxrOLE0IZ2frb0X/+lA7XQcI5Jf/tvxTsR1hzj+NvxoZoe 6Xro5qPF3OVXoCW7vVQXhoLsALP7QXwp2vAm3EDqJkN/mDtomh9ECtOCLkbIg85sCt50 0/Gs78E/JzF4s7EqM0W7RxmbTorKhxFqpc/qQ3hg2bIikXdRFOiAIowq4CT3aC3mqqZ0 
daICgNQiiU8Mo92uRaXVo4j9sAXWHwMrkNXeV2kL2vA5VidUNI4AGU6k5fsUyW/dT24s 8iRA== X-Gm-Message-State: AOAM5303HUjp6N09avDJi7uFIZe/YVtxvidLyRgL2sE1RfCAJB03+jac aRpxNsp90snFbLRp/pHUL4N+TVWa0QWu X-Google-Smtp-Source: ABdhPJz6rrYZkH/v2aljBcBdIGyDW54b6cdwjWQo702TbKhTmeLYyssCI4/6m0kUoFgKEZMQdNNjyMP//c+z X-Received: from bgardon.sea.corp.google.com ([2620:15c:100:202:8026:6888:3d55:3842]) (user=bgardon job=sendgmr) by 2002:a25:aa48:: with SMTP id s66mr7884275ybi.121.1617224978610; Wed, 31 Mar 2021 14:09:38 -0700 (PDT) Date: Wed, 31 Mar 2021 14:08:41 -0700 In-Reply-To: <20210331210841.3996155-1-bgardon@google.com> Message-Id: <20210331210841.3996155-14-bgardon@google.com> Mime-Version: 1.0 References: <20210331210841.3996155-1-bgardon@google.com> X-Mailer: git-send-email 2.31.0.291.g576ba9dcdaf-goog Subject: [PATCH 13/13] KVM: x86/mmu: Tear down roots in fast invalidation thread From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , Peter Feiner , Junaid Shahid , Jim Mattson , Yulei Zhang , Wanpeng Li , Vitaly Kuznetsov , Xiao Guangrong , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org To avoid saddling a vCPU thread with the work of tearing down an entire paging structure, take a reference on each root before they become obsolete, so that the thread initiating the fast invalidation can tear down the paging structure and (most likely) release the last reference. As a bonus, this teardown can happen under the MMU lock in read mode so as not to block the progress of vCPU threads. Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/mmu.c | 6 ++++ arch/x86/kvm/mmu/tdp_mmu.c | 74 +++++++++++++++++++++++++++++++++++++- arch/x86/kvm/mmu/tdp_mmu.h | 1 + 3 files changed, 80 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 49b7097fb55b..22742619698d 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -5455,6 +5455,12 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm) kvm_zap_obsolete_pages(kvm); write_unlock(&kvm->mmu_lock); + + if (is_tdp_mmu_enabled(kvm)) { + read_lock(&kvm->mmu_lock); + kvm_tdp_mmu_zap_all_fast(kvm); + read_unlock(&kvm->mmu_lock); + } } static bool kvm_has_zapped_obsolete_pages(struct kvm *kvm) diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 428ff6778426..5498df7e2e1f 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -794,13 +794,85 @@ void kvm_tdp_mmu_zap_all(struct kvm *kvm) * kvm_reload_remote_mmus. Since this is in the same critical section, no new * roots will be created between this function and the MMU reload signals * being sent. + * Take a reference on all roots so that this thread can do the bulk of + * the work required to free the roots once they are invalidated. Without + * this reference, a vCPU thread might drop the last reference to a root + * and get stuck with tearing down the entire paging structure. 
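To make the hand-off concrete: the invalidating thread pins each root before marking it invalid, so after the vCPUs drop their references it is this thread's put that reaches zero and performs the heavy teardown. A compact C11 sketch of that reference hand-off (invented names; the real code uses refcount_inc_not_zero() and only pins roots that are still live):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct root {
	atomic_int refcount;
	bool invalid;
};

static void put_root(struct root *r)
{
	if (atomic_fetch_sub(&r->refcount, 1) == 1) {
		printf("bulk teardown happens here\n");
		free(r);
	}
}

/*
 * Analogue of kvm_tdp_mmu_invalidate_roots(): pin the root first, then mark
 * it invalid, so the invalidating thread holds a reference of its own.
 */
static void invalidate_root(struct root *r)
{
	atomic_fetch_add(&r->refcount, 1);
	r->invalid = true;
}

/*
 * Analogue of kvm_tdp_mmu_zap_all_fast(): do the heavy zapping, then drop
 * the reference taken above - usually the last one, so the free lands on
 * this thread instead of on a vCPU.
 */
static void zap_all_fast(struct root *r)
{
	/* ... zap the root's GFN range here ... */
	put_root(r);
}

int main(void)
{
	struct root *r = calloc(1, sizeof(*r));

	atomic_init(&r->refcount, 1);   /* the vCPU's reference */

	invalidate_root(r);             /* refcount: 2, root now invalid */
	put_root(r);                    /* vCPU drops its ref: refcount 1 */
	zap_all_fast(r);                /* this thread frees the root */
	return 0;
}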
*/ void kvm_tdp_mmu_invalidate_roots(struct kvm *kvm) { struct kvm_mmu_page *root; for_each_tdp_mmu_root(kvm, root) - root->role.invalid = true; + if (refcount_inc_not_zero(&root->tdp_mmu_root_count)) + root->role.invalid = true; +} + +static struct kvm_mmu_page *next_invalidated_root(struct kvm *kvm, + struct kvm_mmu_page *prev_root) +{ + struct kvm_mmu_page *next_root; + + if (prev_root) + next_root = list_next_or_null_rcu(&kvm->arch.tdp_mmu_roots, + &prev_root->link, + typeof(*prev_root), link); + else + next_root = list_first_or_null_rcu(&kvm->arch.tdp_mmu_roots, + typeof(*next_root), link); + + while (next_root && !(next_root->role.invalid && + refcount_read(&next_root->tdp_mmu_root_count))) + next_root = list_next_or_null_rcu(&kvm->arch.tdp_mmu_roots, + &next_root->link, + typeof(*next_root), link); + + return next_root; +} + +/* + * Since kvm_tdp_mmu_invalidate_roots has acquired a reference to each + * invalidated root, they will not be freed until this function drops the + * reference. Before dropping that reference, tear down the paging + * structure so that whichever thread does drop the last reference + * only has to do a trivial amount of work. Since the roots are invalid, + * no new SPTEs should be created under them. + */ +void kvm_tdp_mmu_zap_all_fast(struct kvm *kvm) +{ + gfn_t max_gfn = 1ULL << (shadow_phys_bits - PAGE_SHIFT); + struct kvm_mmu_page *next_root; + struct kvm_mmu_page *root; + bool flush = false; + + lockdep_assert_held_read(&kvm->mmu_lock); + + rcu_read_lock(); + + root = next_invalidated_root(kvm, NULL); + + while (root) { + next_root = next_invalidated_root(kvm, root); + + rcu_read_unlock(); + + flush |= zap_gfn_range(kvm, root, 0, max_gfn, true, true); + + /* + * Put the reference acquired in + * kvm_tdp_mmu_invalidate_roots + */ + kvm_tdp_mmu_put_root(kvm, root, true); + + root = next_root; + + rcu_read_lock(); + } + + rcu_read_unlock(); + + if (flush) + kvm_flush_remote_tlbs(kvm); } /* diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h index ff4978817fb8..d6d98f9047cd 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.h +++ b/arch/x86/kvm/mmu/tdp_mmu.h @@ -24,6 +24,7 @@ bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end, void kvm_tdp_mmu_zap_all(struct kvm *kvm); void kvm_tdp_mmu_invalidate_roots(struct kvm *kvm); +void kvm_tdp_mmu_zap_all_fast(struct kvm *kvm); int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code, int map_writable, int max_level, kvm_pfn_t pfn,