From patchwork Tue Apr 27 22:36:30 2021
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12227647
Date: Tue, 27 Apr 2021 15:36:30 -0700
In-Reply-To: <20210427223635.2711774-1-bgardon@google.com>
Message-Id: <20210427223635.2711774-2-bgardon@google.com>
References: <20210427223635.2711774-1-bgardon@google.com>
Subject: [PATCH 1/6] KVM: x86/mmu: Track
if shadow MMU active From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , Junaid Shahid , Jim Mattson , Yulei Zhang , Wanpeng Li , Vitaly Kuznetsov , Xiao Guangrong , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add a field to each VM to track if the shadow / legacy MMU is actually in use. If the shadow MMU is not in use, then that knowledge opens the door to other optimizations which will be added in future patches. Signed-off-by: Ben Gardon --- arch/x86/include/asm/kvm_host.h | 2 ++ arch/x86/kvm/mmu/mmu.c | 10 +++++++++- arch/x86/kvm/mmu/mmu_internal.h | 2 ++ arch/x86/kvm/mmu/tdp_mmu.c | 6 ++++-- arch/x86/kvm/mmu/tdp_mmu.h | 4 ++-- 5 files changed, 19 insertions(+), 5 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index ad22d4839bcc..3900dcf2439e 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1122,6 +1122,8 @@ struct kvm_arch { */ spinlock_t tdp_mmu_pages_lock; #endif /* CONFIG_X86_64 */ + + bool shadow_mmu_active; }; struct kvm_vm_stat { diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 930ac8a7e7c9..3975272321d0 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3110,6 +3110,11 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, return ret; } +void activate_shadow_mmu(struct kvm *kvm) +{ + kvm->arch.shadow_mmu_active = true; +} + static void mmu_free_root_page(struct kvm *kvm, hpa_t *root_hpa, struct list_head *invalid_list) { @@ -3280,6 +3285,8 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu) } } + activate_shadow_mmu(vcpu->kvm); + write_lock(&vcpu->kvm->mmu_lock); r = make_mmu_pages_available(vcpu); if (r < 0) @@ -5467,7 +5474,8 @@ void kvm_mmu_init_vm(struct kvm *kvm) { struct kvm_page_track_notifier_node *node = &kvm->arch.mmu_sp_tracker; - kvm_mmu_init_tdp_mmu(kvm); + if (!kvm_mmu_init_tdp_mmu(kvm)) + activate_shadow_mmu(kvm); node->track_write = kvm_mmu_pte_write; node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot; diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index f2546d6d390c..297a911c018c 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -165,4 +165,6 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc); void account_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp); void unaccount_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp); +void activate_shadow_mmu(struct kvm *kvm); + #endif /* __KVM_X86_MMU_INTERNAL_H */ diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 83cbdbe5de5a..5342aca2c8e0 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -14,10 +14,10 @@ static bool __read_mostly tdp_mmu_enabled = false; module_param_named(tdp_mmu, tdp_mmu_enabled, bool, 0644); /* Initializes the TDP MMU for the VM, if enabled. */ -void kvm_mmu_init_tdp_mmu(struct kvm *kvm) +bool kvm_mmu_init_tdp_mmu(struct kvm *kvm) { if (!tdp_enabled || !READ_ONCE(tdp_mmu_enabled)) - return; + return false; /* This should not be changed for the lifetime of the VM. 
*/ kvm->arch.tdp_mmu_enabled = true; @@ -25,6 +25,8 @@ void kvm_mmu_init_tdp_mmu(struct kvm *kvm) INIT_LIST_HEAD(&kvm->arch.tdp_mmu_roots); spin_lock_init(&kvm->arch.tdp_mmu_pages_lock); INIT_LIST_HEAD(&kvm->arch.tdp_mmu_pages); + + return true; } static __always_inline void kvm_lockdep_assert_mmu_lock_held(struct kvm *kvm, diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h index 5fdf63090451..b046ab5137a1 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.h +++ b/arch/x86/kvm/mmu/tdp_mmu.h @@ -80,12 +80,12 @@ int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes, int *root_level); #ifdef CONFIG_X86_64 -void kvm_mmu_init_tdp_mmu(struct kvm *kvm); +bool kvm_mmu_init_tdp_mmu(struct kvm *kvm); void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm); static inline bool is_tdp_mmu_enabled(struct kvm *kvm) { return kvm->arch.tdp_mmu_enabled; } static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return sp->tdp_mmu_page; } #else -static inline void kvm_mmu_init_tdp_mmu(struct kvm *kvm) {} +static inline bool kvm_mmu_init_tdp_mmu(struct kvm *kvm) { return false; } static inline void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm) {} static inline bool is_tdp_mmu_enabled(struct kvm *kvm) { return false; } static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return false; }

From patchwork Tue Apr 27 22:36:31 2021
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12227649
Date: Tue, 27 Apr 2021 15:36:31 -0700
In-Reply-To: <20210427223635.2711774-1-bgardon@google.com>
Message-Id: <20210427223635.2711774-3-bgardon@google.com>
References: <20210427223635.2711774-1-bgardon@google.com>
Subject: [PATCH 2/6] KVM: x86/mmu: Skip rmap operations if shadow MMU inactive
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier, Junaid Shahid, Jim Mattson, Yulei Zhang, Wanpeng Li, Vitaly Kuznetsov, Xiao Guangrong, Ben Gardon

If the shadow MMU is not in use, and only the TDP MMU is being used to manage the memory mappings for a VM, then many rmap operations can be skipped as they are guaranteed to be no-ops. This saves some time which would be spent on the rmap operation. It also avoids acquiring the MMU lock in write mode for many operations.
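For illustration, the shape of the guard that the diff below repeats across mmu.c is roughly the following; this is a simplified, self-contained sketch (the struct layout, handler names, and flush logic are stand-ins, not the kernel code):

#include <stdbool.h>
#include <stdio.h>

struct kvm {
	bool shadow_mmu_active;		/* set once the shadow/legacy MMU is used */
	bool tdp_mmu_enabled;
};

/* Stand-ins for the rmap walk and the TDP MMU handler. */
static bool handle_rmap_range(struct kvm *kvm)    { (void)kvm; return true; }
static bool handle_tdp_mmu_range(struct kvm *kvm) { (void)kvm; return true; }

/*
 * Before this patch the rmap walk always ran; afterwards it is skipped
 * whenever the shadow MMU has never been active, since every rmap is
 * then guaranteed to be empty.
 */
static bool unmap_gfn_range(struct kvm *kvm)
{
	bool flush = false;

	if (kvm->shadow_mmu_active)
		flush = handle_rmap_range(kvm);

	if (kvm->tdp_mmu_enabled)
		flush |= handle_tdp_mmu_range(kvm);

	return flush;
}

int main(void)
{
	struct kvm vm = { .shadow_mmu_active = false, .tdp_mmu_enabled = true };

	printf("flush needed: %d\n", unmap_gfn_range(&vm));
	return 0;
}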
Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/mmu.c | 128 +++++++++++++++++++++++++---------------- 1 file changed, 77 insertions(+), 51 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 3975272321d0..e252af46f205 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -1189,6 +1189,10 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm, if (is_tdp_mmu_enabled(kvm)) kvm_tdp_mmu_clear_dirty_pt_masked(kvm, slot, slot->base_gfn + gfn_offset, mask, true); + + if (!kvm->arch.shadow_mmu_active) + return; + while (mask) { rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask), PG_LEVEL_4K, slot); @@ -1218,6 +1222,10 @@ static void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm, if (is_tdp_mmu_enabled(kvm)) kvm_tdp_mmu_clear_dirty_pt_masked(kvm, slot, slot->base_gfn + gfn_offset, mask, false); + + if (!kvm->arch.shadow_mmu_active) + return; + while (mask) { rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask), PG_LEVEL_4K, slot); @@ -1260,9 +1268,12 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm, int i; bool write_protected = false; - for (i = PG_LEVEL_4K; i <= KVM_MAX_HUGEPAGE_LEVEL; ++i) { - rmap_head = __gfn_to_rmap(gfn, i, slot); - write_protected |= __rmap_write_protect(kvm, rmap_head, true); + if (kvm->arch.shadow_mmu_active) { + for (i = PG_LEVEL_4K; i <= KVM_MAX_HUGEPAGE_LEVEL; ++i) { + rmap_head = __gfn_to_rmap(gfn, i, slot); + write_protected |= __rmap_write_protect(kvm, rmap_head, + true); + } } if (is_tdp_mmu_enabled(kvm)) @@ -1433,9 +1444,10 @@ static __always_inline bool kvm_handle_gfn_range(struct kvm *kvm, bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range) { - bool flush; + bool flush = false; - flush = kvm_handle_gfn_range(kvm, range, kvm_unmap_rmapp); + if (kvm->arch.shadow_mmu_active) + flush = kvm_handle_gfn_range(kvm, range, kvm_unmap_rmapp); if (is_tdp_mmu_enabled(kvm)) flush |= kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush); @@ -1445,9 +1457,10 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range) bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range) { - bool flush; + bool flush = false; - flush = kvm_handle_gfn_range(kvm, range, kvm_set_pte_rmapp); + if (kvm->arch.shadow_mmu_active) + flush = kvm_handle_gfn_range(kvm, range, kvm_set_pte_rmapp); if (is_tdp_mmu_enabled(kvm)) flush |= kvm_tdp_mmu_set_spte_gfn(kvm, range); @@ -1500,9 +1513,10 @@ static void rmap_recycle(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn) bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range) { - bool young; + bool young = false; - young = kvm_handle_gfn_range(kvm, range, kvm_age_rmapp); + if (kvm->arch.shadow_mmu_active) + young = kvm_handle_gfn_range(kvm, range, kvm_age_rmapp); if (is_tdp_mmu_enabled(kvm)) young |= kvm_tdp_mmu_age_gfn_range(kvm, range); @@ -1512,9 +1526,10 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range) bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range) { - bool young; + bool young = false; - young = kvm_handle_gfn_range(kvm, range, kvm_test_age_rmapp); + if (kvm->arch.shadow_mmu_active) + young = kvm_handle_gfn_range(kvm, range, kvm_test_age_rmapp); if (is_tdp_mmu_enabled(kvm)) young |= kvm_tdp_mmu_test_age_gfn(kvm, range); @@ -5447,7 +5462,8 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm) */ kvm_reload_remote_mmus(kvm); - kvm_zap_obsolete_pages(kvm); + if (kvm->arch.shadow_mmu_active) + kvm_zap_obsolete_pages(kvm); write_unlock(&kvm->mmu_lock); @@ -5498,29 +5514,29 @@ void 
kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end) int i; bool flush = false; - write_lock(&kvm->mmu_lock); - for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) { - slots = __kvm_memslots(kvm, i); - kvm_for_each_memslot(memslot, slots) { - gfn_t start, end; - - start = max(gfn_start, memslot->base_gfn); - end = min(gfn_end, memslot->base_gfn + memslot->npages); - if (start >= end) - continue; + if (kvm->arch.shadow_mmu_active) { + write_lock(&kvm->mmu_lock); + for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) { + slots = __kvm_memslots(kvm, i); + kvm_for_each_memslot(memslot, slots) { + gfn_t start, end; + + start = max(gfn_start, memslot->base_gfn); + end = min(gfn_end, memslot->base_gfn + memslot->npages); + if (start >= end) + continue; - flush = slot_handle_level_range(kvm, memslot, kvm_zap_rmapp, - PG_LEVEL_4K, - KVM_MAX_HUGEPAGE_LEVEL, - start, end - 1, true, flush); + flush = slot_handle_level_range(kvm, memslot, + kvm_zap_rmapp, PG_LEVEL_4K, + KVM_MAX_HUGEPAGE_LEVEL, start, + end - 1, true, flush); + } } + if (flush) + kvm_flush_remote_tlbs_with_address(kvm, gfn_start, gfn_end); + write_unlock(&kvm->mmu_lock); } - if (flush) - kvm_flush_remote_tlbs_with_address(kvm, gfn_start, gfn_end); - - write_unlock(&kvm->mmu_lock); - if (is_tdp_mmu_enabled(kvm)) { flush = false; @@ -5547,12 +5563,15 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm, struct kvm_memory_slot *memslot, int start_level) { - bool flush; + bool flush = false; - write_lock(&kvm->mmu_lock); - flush = slot_handle_level(kvm, memslot, slot_rmap_write_protect, - start_level, KVM_MAX_HUGEPAGE_LEVEL, false); - write_unlock(&kvm->mmu_lock); + if (kvm->arch.shadow_mmu_active) { + write_lock(&kvm->mmu_lock); + flush = slot_handle_level(kvm, memslot, slot_rmap_write_protect, + start_level, KVM_MAX_HUGEPAGE_LEVEL, + false); + write_unlock(&kvm->mmu_lock); + } if (is_tdp_mmu_enabled(kvm)) { read_lock(&kvm->mmu_lock); @@ -5622,16 +5641,15 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm, struct kvm_memory_slot *slot = (struct kvm_memory_slot *)memslot; bool flush; - write_lock(&kvm->mmu_lock); - flush = slot_handle_leaf(kvm, slot, kvm_mmu_zap_collapsible_spte, true); - - if (flush) - kvm_arch_flush_remote_tlbs_memslot(kvm, slot); - write_unlock(&kvm->mmu_lock); + if (kvm->arch.shadow_mmu_active) { + write_lock(&kvm->mmu_lock); + flush = slot_handle_leaf(kvm, slot, kvm_mmu_zap_collapsible_spte, true); + if (flush) + kvm_arch_flush_remote_tlbs_memslot(kvm, slot); + write_unlock(&kvm->mmu_lock); + } if (is_tdp_mmu_enabled(kvm)) { - flush = false; - read_lock(&kvm->mmu_lock); flush = kvm_tdp_mmu_zap_collapsible_sptes(kvm, slot, flush); if (flush) @@ -5658,11 +5676,14 @@ void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm, void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm, struct kvm_memory_slot *memslot) { - bool flush; + bool flush = false; - write_lock(&kvm->mmu_lock); - flush = slot_handle_leaf(kvm, memslot, __rmap_clear_dirty, false); - write_unlock(&kvm->mmu_lock); + if (kvm->arch.shadow_mmu_active) { + write_lock(&kvm->mmu_lock); + flush = slot_handle_leaf(kvm, memslot, __rmap_clear_dirty, + false); + write_unlock(&kvm->mmu_lock); + } if (is_tdp_mmu_enabled(kvm)) { read_lock(&kvm->mmu_lock); @@ -5687,6 +5708,14 @@ void kvm_mmu_zap_all(struct kvm *kvm) int ign; write_lock(&kvm->mmu_lock); + if (is_tdp_mmu_enabled(kvm)) + kvm_tdp_mmu_zap_all(kvm); + + if (!kvm->arch.shadow_mmu_active) { + write_unlock(&kvm->mmu_lock); + return; + } + restart: list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) 
{ if (WARN_ON(sp->role.invalid)) @@ -5699,9 +5728,6 @@ void kvm_mmu_zap_all(struct kvm *kvm) kvm_mmu_commit_zap_page(kvm, &invalid_list); - if (is_tdp_mmu_enabled(kvm)) - kvm_tdp_mmu_zap_all(kvm); - write_unlock(&kvm->mmu_lock); }

From patchwork Tue Apr 27 22:36:32 2021
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12227651
Date: Tue, 27 Apr 2021 15:36:32 -0700
In-Reply-To: <20210427223635.2711774-1-bgardon@google.com>
Message-Id: <20210427223635.2711774-4-bgardon@google.com>
References: <20210427223635.2711774-1-bgardon@google.com>
Subject: [PATCH 3/6] KVM: x86/mmu: Deduplicate rmap freeing in allocate_memslot_rmap
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier, Junaid Shahid, Jim Mattson, Yulei Zhang, Wanpeng Li, Vitaly Kuznetsov, Xiao Guangrong, Ben Gardon

Small code deduplication. No functional change expected. Signed-off-by: Ben Gardon --- arch/x86/kvm/x86.c | 19 +++++++++++-------- 1 file changed, 11 insertions(+), 8 deletions(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index cf3b67679cf0..5bcf07465c47 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -10818,17 +10818,23 @@ void kvm_arch_destroy_vm(struct kvm *kvm) kvm_hv_destroy_vm(kvm); } -void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot) +static void free_memslot_rmap(struct kvm_memory_slot *slot) { int i; for (i = 0; i < KVM_NR_PAGE_SIZES; ++i) { kvfree(slot->arch.rmap[i]); slot->arch.rmap[i] = NULL; + } +} - if (i == 0) - continue; +void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot) +{ + int i; + + free_memslot_rmap(slot); + for (i = 1; i < KVM_NR_PAGE_SIZES; ++i) { kvfree(slot->arch.lpage_info[i - 1]); slot->arch.lpage_info[i - 1] = NULL; } @@ -10894,12 +10900,9 @@ static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot, return 0; out_free: - for (i = 0; i < KVM_NR_PAGE_SIZES; ++i) { - kvfree(slot->arch.rmap[i]); - slot->arch.rmap[i] = NULL; - if (i == 0) - continue; + free_memslot_rmap(slot); + for (i = 1; i < KVM_NR_PAGE_SIZES; ++i) { kvfree(slot->arch.lpage_info[i - 1]); slot->arch.lpage_info[i - 1] = NULL; }
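As a rough, standalone sketch of the per-level rmap sizing that the next patch (4/6) factors into an alloc_memslot_rmap() helper; the shift macro and gfn_to_index() below are simplified stand-ins for KVM's definitions:

#include <stdio.h>

#define KVM_NR_PAGE_SIZES 3				/* 4 KiB, 2 MiB, 1 GiB */
#define LEVEL_GFN_SHIFT(level) (((level) - 1) * 9)	/* 512 entries per step */

/* Simplified equivalent of gfn_to_index(): slot-relative index at a level. */
static unsigned long gfn_to_index(unsigned long gfn, unsigned long base_gfn,
				  int level)
{
	return (gfn >> LEVEL_GFN_SHIFT(level)) - (base_gfn >> LEVEL_GFN_SHIFT(level));
}

int main(void)
{
	unsigned long base_gfn = 0, npages = 1UL << 20;	/* a 4 GiB memslot */
	int i;

	for (i = 0; i < KVM_NR_PAGE_SIZES; ++i) {
		int level = i + 1;
		unsigned long lpages =
			gfn_to_index(base_gfn + npages - 1, base_gfn, level) + 1;

		printf("level %d: %lu rmap heads\n", level, lpages);
	}
	return 0;
}

For a 4 GiB slot this works out to 1048576, 2048, and 4 rmap heads for the 4 KiB, 2 MiB, and 1 GiB levels respectively, which is the allocation the series later makes lazy.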
From patchwork Tue Apr 27 22:36:33 2021
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12227653
Date: Tue, 27 Apr 2021 15:36:33 -0700
In-Reply-To: <20210427223635.2711774-1-bgardon@google.com>
Message-Id: <20210427223635.2711774-5-bgardon@google.com>
References: <20210427223635.2711774-1-bgardon@google.com>
Subject: [PATCH 4/6] KVM: x86/mmu: Factor out allocating memslot rmap
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier, Junaid Shahid, Jim Mattson, Yulei Zhang, Wanpeng Li, Vitaly Kuznetsov, Xiao Guangrong, Ben Gardon

Small refactor to facilitate allocating rmaps for all memslots at once. No functional change expected. Signed-off-by: Ben Gardon --- arch/x86/kvm/x86.c | 41 ++++++++++++++++++++++++++++++++--------- 1 file changed, 32 insertions(+), 9 deletions(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 5bcf07465c47..fc32a7dbe4c4 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -10842,10 +10842,37 @@ void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot) kvm_page_track_free_memslot(slot); } +static int alloc_memslot_rmap(struct kvm_memory_slot *slot, + unsigned long npages) +{ + int i; + + for (i = 0; i < KVM_NR_PAGE_SIZES; ++i) { + int lpages; + int level = i + 1; + + lpages = gfn_to_index(slot->base_gfn + npages - 1, + slot->base_gfn, level) + 1; + + slot->arch.rmap[i] = + kvcalloc(lpages, sizeof(*slot->arch.rmap[i]), + GFP_KERNEL_ACCOUNT); + if (!slot->arch.rmap[i]) + goto out_free; + } + + return 0; + +out_free: + free_memslot_rmap(slot); + return -ENOMEM; +} + static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot, unsigned long npages) { int i; + int r; /* * Clear out the previous array pointers for the KVM_MR_MOVE case.
The @@ -10854,7 +10881,11 @@ static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot, unsigned long npages) */ memset(&slot->arch, 0, sizeof(slot->arch)); - for (i = 0; i < KVM_NR_PAGE_SIZES; ++i) { + r = alloc_memslot_rmap(slot, npages); + if (r) + return r; + + for (i = 1; i < KVM_NR_PAGE_SIZES; ++i) { struct kvm_lpage_info *linfo; unsigned long ugfn; int lpages; @@ -10863,14 +10894,6 @@ static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot, lpages = gfn_to_index(slot->base_gfn + npages - 1, slot->base_gfn, level) + 1; - slot->arch.rmap[i] = - kvcalloc(lpages, sizeof(*slot->arch.rmap[i]), - GFP_KERNEL_ACCOUNT); - if (!slot->arch.rmap[i]) - goto out_free; - if (i == 0) - continue; - linfo = kvcalloc(lpages, sizeof(*linfo), GFP_KERNEL_ACCOUNT); if (!linfo) goto out_free;

From patchwork Tue Apr 27 22:36:34 2021
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12227655
Date: Tue, 27 Apr 2021 15:36:34 -0700
In-Reply-To: <20210427223635.2711774-1-bgardon@google.com>
Message-Id: <20210427223635.2711774-6-bgardon@google.com>
References: <20210427223635.2711774-1-bgardon@google.com>
Subject: [PATCH 5/6] KVM: x86/mmu: Protect kvm->memslots with a mutex
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier, Junaid Shahid, Jim Mattson, Yulei Zhang, Wanpeng Li, Vitaly Kuznetsov, Xiao Guangrong, Ben Gardon

Add a lock around memslots changes. Currently this lock does not have any effect on the synchronization model, but it will be used in a future commit to facilitate lazy rmap allocation.
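For illustration, the weak-default/arch-override pattern used for kvm_arch_assign_memslots() can be sketched as a userspace toy; the address-space id argument is dropped and the RCU publish is reduced to a plain store, so this is only an approximation of the kernel code:

#include <pthread.h>
#include <stdio.h>

struct kvm_memslots { int generation; };

struct kvm {
	struct kvm_memslots *memslots;
	pthread_mutex_t memslot_assignment_lock;
};

/*
 * Generic KVM provides a __weak kvm_arch_assign_memslots() that just does
 * rcu_assign_pointer(kvm->memslots[as_id], slots).  x86 links in a stronger
 * definition (sketched here) that wraps the same assignment in its own
 * mutex, which the next patch uses to serialize lazy rmap allocation
 * against memslot installation.
 */
void kvm_arch_assign_memslots(struct kvm *kvm, struct kvm_memslots *slots)
{
	pthread_mutex_lock(&kvm->memslot_assignment_lock);
	kvm->memslots = slots;		/* rcu_assign_pointer() in the kernel */
	pthread_mutex_unlock(&kvm->memslot_assignment_lock);
}

int main(void)
{
	struct kvm_memslots slots = { .generation = 1 };
	struct kvm vm = {
		.memslots = NULL,
		.memslot_assignment_lock = PTHREAD_MUTEX_INITIALIZER,
	};

	kvm_arch_assign_memslots(&vm, &slots);
	printf("installed memslots generation %d\n", vm.memslots->generation);
	return 0;
}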
Signed-off-by: Ben Gardon --- arch/x86/include/asm/kvm_host.h | 5 +++++ arch/x86/kvm/x86.c | 11 +++++++++++ include/linux/kvm_host.h | 2 ++ virt/kvm/kvm_main.c | 9 ++++++++- 4 files changed, 26 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 3900dcf2439e..bce7fa152473 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1124,6 +1124,11 @@ struct kvm_arch { #endif /* CONFIG_X86_64 */ bool shadow_mmu_active; + + /* + * Protects kvm->memslots. + */ + struct mutex memslot_assignment_lock; }; struct kvm_vm_stat { diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index fc32a7dbe4c4..30234fe96f48 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -10649,6 +10649,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) raw_spin_lock_init(&kvm->arch.tsc_write_lock); mutex_init(&kvm->arch.apic_map_lock); spin_lock_init(&kvm->arch.pvclock_gtod_sync_lock); + mutex_init(&kvm->arch.memslot_assignment_lock); kvm->arch.kvmclock_offset = -get_kvmclock_base_ns(); pvclock_update_vm_gtod_copy(kvm); @@ -10868,6 +10869,16 @@ static int alloc_memslot_rmap(struct kvm_memory_slot *slot, return -ENOMEM; } + +void kvm_arch_assign_memslots(struct kvm *kvm, int as_id, + struct kvm_memslots *slots) +{ + mutex_lock(&kvm->arch.memslot_assignment_lock); + rcu_assign_pointer(kvm->memslots[as_id], slots); + mutex_unlock(&kvm->arch.memslot_assignment_lock); +} + + static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot, unsigned long npages) { diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 8895b95b6a22..146bb839c754 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -720,6 +720,8 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm, struct kvm_memory_slot *memslot, const struct kvm_userspace_memory_region *mem, enum kvm_mr_change change); +void kvm_arch_assign_memslots(struct kvm *kvm, int as_id, + struct kvm_memslots *slots); void kvm_arch_commit_memory_region(struct kvm *kvm, const struct kvm_userspace_memory_region *mem, struct kvm_memory_slot *old, diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 2799c6660cce..e62a37bc5b90 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -1270,6 +1270,12 @@ static int check_memory_region_flags(const struct kvm_userspace_memory_region *m return 0; } +__weak void kvm_arch_assign_memslots(struct kvm *kvm, int as_id, + struct kvm_memslots *slots) +{ + rcu_assign_pointer(kvm->memslots[as_id], slots); +} + static struct kvm_memslots *install_new_memslots(struct kvm *kvm, int as_id, struct kvm_memslots *slots) { @@ -1279,7 +1285,8 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm, WARN_ON(gen & KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS); slots->generation = gen | KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS; - rcu_assign_pointer(kvm->memslots[as_id], slots); + kvm_arch_assign_memslots(kvm, as_id, slots); + synchronize_srcu_expedited(&kvm->srcu); /*

From patchwork Tue Apr 27 22:36:35 2021
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12227657
Date: Tue, 27 Apr 2021 15:36:35 -0700
In-Reply-To: <20210427223635.2711774-1-bgardon@google.com>
Message-Id: <20210427223635.2711774-7-bgardon@google.com>
References: <20210427223635.2711774-1-bgardon@google.com>
Subject: [PATCH 6/6] KVM: x86/mmu: Lazily allocate memslot rmaps
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier, Junaid Shahid, Jim Mattson, Yulei Zhang, Wanpeng Li, Vitaly Kuznetsov, Xiao Guangrong, Ben Gardon

If the TDP MMU is in use, wait to allocate the rmaps until the shadow MMU is actually needed (i.e. when a nested VM is launched). This saves memory equal to 0.2% of guest memory in cases where the TDP MMU is used and there are no nested guests involved.
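For illustration, the ordering the patch below relies on — set alloc_memslot_rmaps under the assignment lock, backfill rmaps for every existing memslot, and only then mark shadow_mmu_active — can be sketched as follows; the types and helpers are simplified stand-ins, not the kernel functions:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define NR_SLOTS 2

struct memslot { unsigned long *rmap; unsigned long npages; };

struct kvm {
	struct memslot slots[NR_SLOTS];
	bool alloc_memslot_rmaps;	/* new/updated slots must get rmaps */
	bool shadow_mmu_active;		/* rmaps guaranteed to exist */
	pthread_mutex_t memslot_assignment_lock;
};

/* Allocate an rmap for one slot unless it already has one. */
static int alloc_memslot_rmap(struct kvm *kvm, struct memslot *slot)
{
	if (!kvm->alloc_memslot_rmaps || slot->rmap)
		return 0;
	slot->rmap = calloc(slot->npages, sizeof(*slot->rmap));
	return slot->rmap ? 0 : -1;
}

/* Called the first time the shadow MMU is needed, e.g. on nested VM entry. */
static int activate_shadow_mmu(struct kvm *kvm)
{
	int i, r = 0;

	pthread_mutex_lock(&kvm->memslot_assignment_lock);
	kvm->alloc_memslot_rmaps = true;	/* future memslots get rmaps */
	for (i = 0; i < NR_SLOTS && !r; i++)	/* backfill existing memslots */
		r = alloc_memslot_rmap(kvm, &kvm->slots[i]);
	pthread_mutex_unlock(&kvm->memslot_assignment_lock);

	if (!r)
		kvm->shadow_mmu_active = true;
	return r;
}

int main(void)
{
	struct kvm vm = {
		.slots = { { NULL, 512 }, { NULL, 1024 } },
		.memslot_assignment_lock = PTHREAD_MUTEX_INITIALIZER,
	};

	printf("activate: %d, shadow active: %d\n",
	       activate_shadow_mmu(&vm), vm.shadow_mmu_active);
	return 0;
}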
Signed-off-by: Ben Gardon --- arch/x86/include/asm/kvm_host.h | 15 +++++++- arch/x86/kvm/mmu/mmu.c | 21 ++++++++-- arch/x86/kvm/mmu/mmu_internal.h | 2 +- arch/x86/kvm/x86.c | 68 ++++++++++++++++++++++++++++++--- include/linux/kvm_host.h | 2 +- virt/kvm/kvm_main.c | 43 +++++++++++++++------ 6 files changed, 129 insertions(+), 22 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index bce7fa152473..9ce4cfaf6539 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1126,7 +1126,18 @@ struct kvm_arch { bool shadow_mmu_active; /* - * Protects kvm->memslots. + * If set, the rmap should be allocated for any newly created or + * modified memslots. If allocating rmaps lazily, this may be set + * before the rmaps are allocated for existing memslots, but + * shadow_mmu_active will not be set until after the rmaps are fully + * allocated. Protected by the memslot assignment lock, below. + */ + bool alloc_memslot_rmaps; + + /* + * Protects kvm->memslots and alloc_memslot_rmaps (above) to ensure + * that once alloc_memslot_rmaps is set, no memslot is left without an + * rmap. */ struct mutex memslot_assignment_lock; }; @@ -1860,4 +1871,6 @@ static inline int kvm_cpu_get_apicid(int mps_cpu) int kvm_cpu_dirty_log_size(void); +int alloc_all_memslots_rmaps(struct kvm *kvm); + #endif /* _ASM_X86_KVM_HOST_H */ diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index e252af46f205..b2a6585bd978 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3125,9 +3125,17 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, return ret; } -void activate_shadow_mmu(struct kvm *kvm) +int activate_shadow_mmu(struct kvm *kvm) { + int r; + + r = alloc_all_memslots_rmaps(kvm); + if (r) + return r; + kvm->arch.shadow_mmu_active = true; + + return 0; } static void mmu_free_root_page(struct kvm *kvm, hpa_t *root_hpa, @@ -3300,7 +3308,9 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu) } } - activate_shadow_mmu(vcpu->kvm); + r = activate_shadow_mmu(vcpu->kvm); + if (r) + return r; write_lock(&vcpu->kvm->mmu_lock); r = make_mmu_pages_available(vcpu); @@ -5491,7 +5501,12 @@ void kvm_mmu_init_vm(struct kvm *kvm) struct kvm_page_track_notifier_node *node = &kvm->arch.mmu_sp_tracker; if (!kvm_mmu_init_tdp_mmu(kvm)) - activate_shadow_mmu(kvm); + /* + * No memslots can have been allocated at this point. + * activate_shadow_mmu won't actually need to allocate + * rmaps, so it cannot fail. 
+ */ + WARN_ON(activate_shadow_mmu(kvm)); node->track_write = kvm_mmu_pte_write; node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot; diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index 297a911c018c..c6b21a916452 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -165,6 +165,6 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc); void account_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp); void unaccount_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp); -void activate_shadow_mmu(struct kvm *kvm); +int activate_shadow_mmu(struct kvm *kvm); #endif /* __KVM_X86_MMU_INTERNAL_H */ diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 30234fe96f48..1aca39673168 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -10843,11 +10843,24 @@ void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot) kvm_page_track_free_memslot(slot); } -static int alloc_memslot_rmap(struct kvm_memory_slot *slot, +static int alloc_memslot_rmap(struct kvm *kvm, struct kvm_memory_slot *slot, unsigned long npages) { int i; + if (!kvm->arch.alloc_memslot_rmaps) + return 0; + + /* + * All rmaps for a memslot should be allocated either before + * the memslot is installed (in which case no other threads + * should have a pointer to it), or under the + * memslot_assignment_lock. Avoid overwriting already allocated + * rmaps. + */ + if (slot->arch.rmap[0]) + return 0; + for (i = 0; i < KVM_NR_PAGE_SIZES; ++i) { int lpages; int level = i + 1; @@ -10869,17 +10882,62 @@ static int alloc_memslot_rmap(struct kvm_memory_slot *slot, return -ENOMEM; } +int alloc_memslots_rmaps(struct kvm *kvm, struct kvm_memslots *slots) +{ + struct kvm_memory_slot *slot; + int r = 0; + + kvm_for_each_memslot(slot, slots) { + r = alloc_memslot_rmap(kvm, slot, slot->npages); + if (r) + break; + } + return r; +} + +int alloc_all_memslots_rmaps(struct kvm *kvm) +{ + struct kvm_memslots *slots; + int r = 0; + int i; -void kvm_arch_assign_memslots(struct kvm *kvm, int as_id, + mutex_lock(&kvm->arch.memslot_assignment_lock); + kvm->arch.alloc_memslot_rmaps = true; + + for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) { + slots = __kvm_memslots(kvm, i); + r = alloc_memslots_rmaps(kvm, slots); + if (r) + break; + } + mutex_unlock(&kvm->arch.memslot_assignment_lock); + return r; +} + +int kvm_arch_assign_memslots(struct kvm *kvm, int as_id, struct kvm_memslots *slots) { + int r; + mutex_lock(&kvm->arch.memslot_assignment_lock); + + if (kvm->arch.alloc_memslot_rmaps) { + r = alloc_memslots_rmaps(kvm, slots); + if (r) { + mutex_unlock(&kvm->arch.memslot_assignment_lock); + return r; + } + } + rcu_assign_pointer(kvm->memslots[as_id], slots); mutex_unlock(&kvm->arch.memslot_assignment_lock); + + return 0; } -static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot, +static int kvm_alloc_memslot_metadata(struct kvm *kvm, + struct kvm_memory_slot *slot, unsigned long npages) { int i; @@ -10892,7 +10950,7 @@ static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot, */ memset(&slot->arch, 0, sizeof(slot->arch)); - r = alloc_memslot_rmap(slot, npages); + r = alloc_memslot_rmap(kvm, slot, npages); if (r) return r; @@ -10965,7 +11023,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm, enum kvm_mr_change change) { if (change == KVM_MR_CREATE || change == KVM_MR_MOVE) - return kvm_alloc_memslot_metadata(memslot, + return kvm_alloc_memslot_metadata(kvm, memslot, mem->memory_size >> PAGE_SHIFT); return 0; } diff --git 
a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 146bb839c754..0a34491a5c40 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -720,7 +720,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm, struct kvm_memory_slot *memslot, const struct kvm_userspace_memory_region *mem, enum kvm_mr_change change); -void kvm_arch_assign_memslots(struct kvm *kvm, int as_id, +int kvm_arch_assign_memslots(struct kvm *kvm, int as_id, struct kvm_memslots *slots); void kvm_arch_commit_memory_region(struct kvm *kvm, const struct kvm_userspace_memory_region *mem, diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index e62a37bc5b90..657e29ce8a05 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -1270,22 +1270,31 @@ static int check_memory_region_flags(const struct kvm_userspace_memory_region *m return 0; } -__weak void kvm_arch_assign_memslots(struct kvm *kvm, int as_id, +__weak int kvm_arch_assign_memslots(struct kvm *kvm, int as_id, struct kvm_memslots *slots) { rcu_assign_pointer(kvm->memslots[as_id], slots); + return 0; } -static struct kvm_memslots *install_new_memslots(struct kvm *kvm, - int as_id, struct kvm_memslots *slots) +static int install_new_memslots(struct kvm *kvm, int as_id, + struct kvm_memslots *slots, + struct kvm_memslots **old_slots) { - struct kvm_memslots *old_memslots = __kvm_memslots(kvm, as_id); - u64 gen = old_memslots->generation; + u64 gen; + int r; + + *old_slots = __kvm_memslots(kvm, as_id); + gen = (*old_slots)->generation; WARN_ON(gen & KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS); slots->generation = gen | KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS; - kvm_arch_assign_memslots(kvm, as_id, slots); + r = kvm_arch_assign_memslots(kvm, as_id, slots); + if (r) { + old_slots = NULL; + return r; + } synchronize_srcu_expedited(&kvm->srcu); @@ -1310,7 +1319,7 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm, slots->generation = gen; - return old_memslots; + return 0; } /* @@ -1346,6 +1355,7 @@ static int kvm_set_memslot(struct kvm *kvm, enum kvm_mr_change change) { struct kvm_memory_slot *slot; + struct kvm_memslots *old_slots; struct kvm_memslots *slots; int r; @@ -1367,7 +1377,10 @@ static int kvm_set_memslot(struct kvm *kvm, * dropped by update_memslots anyway. We'll also revert to the * old memslots if preparing the new memory region fails. */ - slots = install_new_memslots(kvm, as_id, slots); + r = install_new_memslots(kvm, as_id, slots, &old_slots); + if (r) + goto out_free; + slots = old_slots; /* From this point no new shadow pages pointing to a deleted, * or moved, memslot will be created. @@ -1384,7 +1397,10 @@ static int kvm_set_memslot(struct kvm *kvm, goto out_slots; update_memslots(slots, new, change); - slots = install_new_memslots(kvm, as_id, slots); + r = install_new_memslots(kvm, as_id, slots, &old_slots); + if (r) + goto out_slots; + slots = old_slots; kvm_arch_commit_memory_region(kvm, mem, old, new, change); @@ -1392,8 +1408,13 @@ static int kvm_set_memslot(struct kvm *kvm, return 0; out_slots: - if (change == KVM_MR_DELETE || change == KVM_MR_MOVE) - slots = install_new_memslots(kvm, as_id, slots); + if (change == KVM_MR_DELETE || change == KVM_MR_MOVE) { + r = install_new_memslots(kvm, as_id, slots, &old_slots); + if (r) + goto out_slots; + slots = old_slots; + } +out_free: kvfree(slots); return r; }
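As a rough sanity check of the 0.2% figure quoted in patch 6/6, assuming one 8-byte rmap head per 4 KiB guest page (the 4 KiB level dominates; the 2 MiB and 1 GiB levels add only about 1/512 and 1/262144 of that again):

#include <stdio.h>

int main(void)
{
	/* One unsigned long (8 bytes) of rmap per 4 KiB page. */
	double overhead = 8.0 / 4096.0;

	printf("rmap overhead: %.2f%% of guest memory\n", overhead * 100.0);
	return 0;
}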