
[v3,01/10] KVM: x86/mmu: Change tdp_mmu to a read-only parameter

Message ID 20220921173546.2674386-2-dmatlack@google.com (mailing list archive)
State New, archived
Series KVM: x86/mmu: Make tdp_mmu read-only and clean up TDP MMU fault handler

Commit Message

David Matlack Sept. 21, 2022, 5:35 p.m. UTC
Change tdp_mmu to a read-only parameter and drop the per-VM
tdp_mmu_enabled. For 32-bit KVM, make tdp_mmu_enabled a macro that is
always false so that the compiler can continue omitting calls to the TDP
MMU.

The TDP MMU was introduced in 5.10 and has been enabled by default since
5.15. At this point there are no known functionality gaps between the
TDP MMU and the shadow MMU, and the TDP MMU uses less memory and scales
better with the number of vCPUs. In other words, there is no good reason
to disable the TDP MMU on a live system.

Purposely do not drop tdp_mmu=N support (i.e. do not force 64-bit KVM to
always use the TDP MMU) since tdp_mmu=N is still used to get test
coverage of KVM's shadow MMU TDP support, which is used in 32-bit KVM.

Signed-off-by: David Matlack <dmatlack@google.com>
---
 arch/x86/include/asm/kvm_host.h |  9 ------
 arch/x86/kvm/mmu.h              |  6 ++--
 arch/x86/kvm/mmu/mmu.c          | 51 ++++++++++++++++++++++-----------
 arch/x86/kvm/mmu/tdp_mmu.c      |  9 ++----
 4 files changed, 39 insertions(+), 36 deletions(-)
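
In outline, the patch replaces the per-VM kvm->arch.tdp_mmu_enabled flag with a
single module-scoped variable. A condensed sketch of the resulting declarations
(the actual change splits them between mmu.h and mmu.c; see the full patch
below):

#ifdef CONFIG_X86_64
/* Read-only module parameter: can only be set when kvm.ko is loaded. */
bool __read_mostly tdp_mmu_enabled = true;
module_param_named(tdp_mmu, tdp_mmu_enabled, bool, 0444);
#else
/* 32-bit KVM: compile-time false, so the compiler can omit TDP MMU calls. */
#define tdp_mmu_enabled false
#endif

Because the parameter is now registered with 0444 permissions, it can no longer
be toggled through sysfs at runtime; it is fixed when kvm.ko is loaded.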

Comments

Huang, Kai Sept. 27, 2022, 9:19 a.m. UTC | #1
>  
> +bool __ro_after_init tdp_mmu_allowed;
> +

[...]

> @@ -5662,6 +5669,9 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
>  	tdp_root_level = tdp_forced_root_level;
>  	max_tdp_level = tdp_max_root_level;
>  
> +#ifdef CONFIG_X86_64
> +	tdp_mmu_enabled = tdp_mmu_allowed && tdp_enabled;
> +#endif
> 

[...]

> @@ -6661,6 +6671,13 @@ void __init kvm_mmu_x86_module_init(void)
>  	if (nx_huge_pages == -1)
>  		__set_nx_huge_pages(get_nx_auto_mode());
>  
> +	/*
> +	 * Snapshot userspace's desire to enable the TDP MMU. Whether or not the
> +	 * TDP MMU is actually enabled is determined in kvm_configure_mmu()
> +	 * when the vendor module is loaded.
> +	 */
> +	tdp_mmu_allowed = tdp_mmu_enabled;
> +
>  	kvm_mmu_spte_module_init();
>  }
> 

Sorry last time I didn't review deeply, but I am wondering why do we need
'tdp_mmu_allowed' at all?  The purpose of having 'allow_mmio_caching' is because
kvm_mmu_set_mmio_spte_mask() is called twice, and 'enable_mmio_caching' can be
disabled in the first call, so it can be against user's desire in the second
call.  However it appears for 'tdp_mmu_enabled' we don't need 'tdp_mmu_allowed',
as kvm_configure_mmu() is only called once by VMX or SVM, if I read correctly.

So, should we just do below in kvm_configure_mmu()?

	#ifdef CONFIG_X86_64
	if (!tdp_enabled)
		tdp_mmu_enabled = false;
	#endif
David Matlack Sept. 27, 2022, 4:14 p.m. UTC | #2
On Tue, Sep 27, 2022 at 2:19 AM Huang, Kai <kai.huang@intel.com> wrote:
>
>
> >
> > +bool __ro_after_init tdp_mmu_allowed;
> > +
>
> [...]
>
> > @@ -5662,6 +5669,9 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
> >       tdp_root_level = tdp_forced_root_level;
> >       max_tdp_level = tdp_max_root_level;
> >
> > +#ifdef CONFIG_X86_64
> > +     tdp_mmu_enabled = tdp_mmu_allowed && tdp_enabled;
> > +#endif
> >
>
> [...]
>
> > @@ -6661,6 +6671,13 @@ void __init kvm_mmu_x86_module_init(void)
> >       if (nx_huge_pages == -1)
> >               __set_nx_huge_pages(get_nx_auto_mode());
> >
> > +     /*
> > +      * Snapshot userspace's desire to enable the TDP MMU. Whether or not the
> > +      * TDP MMU is actually enabled is determined in kvm_configure_mmu()
> > +      * when the vendor module is loaded.
> > +      */
> > +     tdp_mmu_allowed = tdp_mmu_enabled;
> > +
> >       kvm_mmu_spte_module_init();
> >  }
> >
>
> Sorry last time I didn't review deeply, but I am wondering why do we need
> 'tdp_mmu_allowed' at all?  The purpose of having 'allow_mmio_caching' is because
> kvm_mmu_set_mmio_spte_mask() is called twice, and 'enable_mmio_caching' can be
> disabled in the first call, so it can be against user's desire in the second
> call.  However it appears for 'tdp_mmu_enabled' we don't need 'tdp_mmu_allowed',
> as kvm_configure_mmu() is only called once by VMX or SVM, if I read correctly.

tdp_mmu_allowed is needed because kvm_intel and kvm_amd are separate
modules from kvm. So kvm_configure_mmu() can be called multiple times
(each time kvm_intel or kvm_amd is loaded).
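
For illustration, a condensed sketch of the interaction the patch sets up
(assembled from the hunks quoted above; unrelated statements elided):

/* Runs once, when kvm.ko itself is loaded: snapshot the user's request. */
void __init kvm_mmu_x86_module_init(void)
{
	tdp_mmu_allowed = tdp_mmu_enabled;
}

/* Runs each time a vendor module (kvm_intel or kvm_amd) is loaded. */
void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
		       int tdp_max_root_level, int tdp_huge_page_level)
{
	tdp_enabled = enable_tdp;
#ifdef CONFIG_X86_64
	/*
	 * Recompute from the snapshot, so loading the vendor module with
	 * TDP disabled (e.g. ept=N) cannot permanently clear a user's
	 * tdp_mmu=Y across a later reload with TDP enabled again.
	 */
	tdp_mmu_enabled = tdp_mmu_allowed && tdp_enabled;
#endif
}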

>
> So, should we just do below in kvm_configure_mmu()?
>
>         #ifdef CONFIG_X86_64
>         if (!tdp_enabled)
>                 tdp_mmu_enabled = false;
>         #endif
>
>
> --
> Thanks,
> -Kai
>
>
Huang, Kai Sept. 27, 2022, 9:10 p.m. UTC | #3
On Tue, 2022-09-27 at 09:14 -0700, David Matlack wrote:
> On Tue, Sep 27, 2022 at 2:19 AM Huang, Kai <kai.huang@intel.com> wrote:
> > 
> > 
> > > 
> > > +bool __ro_after_init tdp_mmu_allowed;
> > > +
> > 
> > [...]
> > 
> > > @@ -5662,6 +5669,9 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
> > >       tdp_root_level = tdp_forced_root_level;
> > >       max_tdp_level = tdp_max_root_level;
> > > 
> > > +#ifdef CONFIG_X86_64
> > > +     tdp_mmu_enabled = tdp_mmu_allowed && tdp_enabled;
> > > +#endif
> > > 
> > 
> > [...]
> > 
> > > @@ -6661,6 +6671,13 @@ void __init kvm_mmu_x86_module_init(void)
> > >       if (nx_huge_pages == -1)
> > >               __set_nx_huge_pages(get_nx_auto_mode());
> > > 
> > > +     /*
> > > +      * Snapshot userspace's desire to enable the TDP MMU. Whether or not the
> > > +      * TDP MMU is actually enabled is determined in kvm_configure_mmu()
> > > +      * when the vendor module is loaded.
> > > +      */
> > > +     tdp_mmu_allowed = tdp_mmu_enabled;
> > > +
> > >       kvm_mmu_spte_module_init();
> > >  }
> > > 
> > 
> > Sorry last time I didn't review deeply, but I am wondering why do we need
> > 'tdp_mmu_allowed' at all?  The purpose of having 'allow_mmio_caching' is because
> > kvm_mmu_set_mmio_spte_mask() is called twice, and 'enable_mmio_caching' can be
> > disabled in the first call, so it can be against user's desire in the second
> > call.  However it appears for 'tdp_mmu_enabled' we don't need 'tdp_mmu_allowed',
> > as kvm_configure_mmu() is only called once by VMX or SVM, if I read correctly.
> 
> tdp_mmu_allowed is needed because kvm_intel and kvm_amd are separate
> modules from kvm. So kvm_configure_mmu() can be called multiple times
> (each time kvm_intel or kvm_amd is loaded).
> 
> 

Indeed. :)

Reviewed-by: Kai Huang <kai.huang@intel.com>
Huang, Kai Sept. 29, 2022, 1:56 a.m. UTC | #4
On Tue, 2022-09-27 at 21:10 +0000, Huang, Kai wrote:
> On Tue, 2022-09-27 at 09:14 -0700, David Matlack wrote:
> > On Tue, Sep 27, 2022 at 2:19 AM Huang, Kai <kai.huang@intel.com> wrote:
> > > 
> > > 
> > > > 
> > > > +bool __ro_after_init tdp_mmu_allowed;
> > > > +

Nit: it appears this can be static.
Isaku Yamahata Oct. 3, 2022, 6:58 p.m. UTC | #5
On Tue, Sep 27, 2022 at 09:10:43PM +0000,
"Huang, Kai" <kai.huang@intel.com> wrote:

> On Tue, 2022-09-27 at 09:14 -0700, David Matlack wrote:
> > On Tue, Sep 27, 2022 at 2:19 AM Huang, Kai <kai.huang@intel.com> wrote:
> > > 
> > > 
> > > > 
> > > > +bool __ro_after_init tdp_mmu_allowed;
> > > > +
> > > 
> > > [...]
> > > 
> > > > @@ -5662,6 +5669,9 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
> > > >       tdp_root_level = tdp_forced_root_level;
> > > >       max_tdp_level = tdp_max_root_level;
> > > > 
> > > > +#ifdef CONFIG_X86_64
> > > > +     tdp_mmu_enabled = tdp_mmu_allowed && tdp_enabled;
> > > > +#endif
> > > > 
> > > 
> > > [...]
> > > 
> > > > @@ -6661,6 +6671,13 @@ void __init kvm_mmu_x86_module_init(void)
> > > >       if (nx_huge_pages == -1)
> > > >               __set_nx_huge_pages(get_nx_auto_mode());
> > > > 
> > > > +     /*
> > > > +      * Snapshot userspace's desire to enable the TDP MMU. Whether or not the
> > > > +      * TDP MMU is actually enabled is determined in kvm_configure_mmu()
> > > > +      * when the vendor module is loaded.
> > > > +      */
> > > > +     tdp_mmu_allowed = tdp_mmu_enabled;
> > > > +
> > > >       kvm_mmu_spte_module_init();
> > > >  }
> > > > 
> > > 
> > > Sorry last time I didn't review deeply, but I am wondering why do we need
> > > 'tdp_mmu_allowed' at all?  The purpose of having 'allow_mmio_caching' is because
> > > kvm_mmu_set_mmio_spte_mask() is called twice, and 'enable_mmio_caching' can be
> > > disabled in the first call, so it can be against user's desire in the second
> > > call.  However it appears for 'tdp_mmu_enabled' we don't need 'tdp_mmu_allowed',
> > > as kvm_configure_mmu() is only called once by VMX or SVM, if I read correctly.
> > 
> > tdp_mmu_allowed is needed because kvm_intel and kvm_amd are separate
> > modules from kvm. So kvm_configure_mmu() can be called multiple times
> > (each time kvm_intel or kvm_amd is loaded).
> > 
> > 
> 
> Indeed. :)
> 
> Reviewed-by: Kai Huang <kai.huang@intel.com>

kvm_arch_init() which is called early during the module initialization before
kvm_configure_mmu() via kvm_arch_hardware_setup() checks if the vendor module
(kvm_intel or kvm_amd) was already loaded.  If yes, it results in -EEXIST.

So kvm_configure_mmu() won't be called twice.
David Matlack Oct. 3, 2022, 8:02 p.m. UTC | #6
On Mon, Oct 03, 2022 at 11:58:34AM -0700, Isaku Yamahata wrote:
> On Tue, Sep 27, 2022 at 09:10:43PM +0000, "Huang, Kai" <kai.huang@intel.com> wrote:
> > On Tue, 2022-09-27 at 09:14 -0700, David Matlack wrote:
> > > On Tue, Sep 27, 2022 at 2:19 AM Huang, Kai <kai.huang@intel.com> wrote:
> > > > 
> > > > Sorry last time I didn't review deeply, but I am wondering why do we need
> > > > 'tdp_mmu_allowed' at all?  The purpose of having 'allow_mmio_caching' is because
> > > > kvm_mmu_set_mmio_spte_mask() is called twice, and 'enable_mmio_caching' can be
> > > > disabled in the first call, so it can be against user's desire in the second
> > > > call.  However it appears for 'tdp_mmu_enabled' we don't need 'tdp_mmu_allowed',
> > > > as kvm_configure_mmu() is only called once by VMX or SVM, if I read correctly.
> > > 
> > > tdp_mmu_allowed is needed because kvm_intel and kvm_amd are separate
> > > modules from kvm. So kvm_configure_mmu() can be called multiple times
> > > (each time kvm_intel or kvm_amd is loaded).
> > > 
> > > 
> > 
> > Indeed. :)
> > 
> > Reviewed-by: Kai Huang <kai.huang@intel.com>
> 
> kvm_arch_init() which is called early during the module initialization before
> kvm_configure_mmu() via kvm_arch_hardware_setup() checks if the vendor module
> (kvm_intel or kvm_amd) was already loaded.  If yes, it results in -EEXIST.
> 
> So kvm_configure_mmu() won't be called twice.

kvm_configure_mmu() can be called multiple times if the vendor module is
unloaded without unloading the kvm module. For example:

 $ modprobe kvm
 $ modprobe kvm_intel ept=Y  # kvm_configure_mmu(true, ...)
 $ modprobe -r kvm_intel
 $ modprobe kvm_intel ept=N  # kvm_configure_mmu(false, ...)
Huang, Kai Oct. 3, 2022, 8:11 p.m. UTC | #7
On Mon, 2022-10-03 at 11:58 -0700, Isaku Yamahata wrote:
> On Tue, Sep 27, 2022 at 09:10:43PM +0000,
> "Huang, Kai" <kai.huang@intel.com> wrote:
> 
> > On Tue, 2022-09-27 at 09:14 -0700, David Matlack wrote:
> > > On Tue, Sep 27, 2022 at 2:19 AM Huang, Kai <kai.huang@intel.com> wrote:
> > > > 
> > > > 
> > > > > 
> > > > > +bool __ro_after_init tdp_mmu_allowed;
> > > > > +
> > > > 
> > > > [...]
> > > > 
> > > > > @@ -5662,6 +5669,9 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
> > > > >       tdp_root_level = tdp_forced_root_level;
> > > > >       max_tdp_level = tdp_max_root_level;
> > > > > 
> > > > > +#ifdef CONFIG_X86_64
> > > > > +     tdp_mmu_enabled = tdp_mmu_allowed && tdp_enabled;
> > > > > +#endif
> > > > > 
> > > > 
> > > > [...]
> > > > 
> > > > > @@ -6661,6 +6671,13 @@ void __init kvm_mmu_x86_module_init(void)
> > > > >       if (nx_huge_pages == -1)
> > > > >               __set_nx_huge_pages(get_nx_auto_mode());
> > > > > 
> > > > > +     /*
> > > > > +      * Snapshot userspace's desire to enable the TDP MMU. Whether or not the
> > > > > +      * TDP MMU is actually enabled is determined in kvm_configure_mmu()
> > > > > +      * when the vendor module is loaded.
> > > > > +      */
> > > > > +     tdp_mmu_allowed = tdp_mmu_enabled;
> > > > > +
> > > > >       kvm_mmu_spte_module_init();
> > > > >  }
> > > > > 
> > > > 
> > > > Sorry last time I didn't review deeply, but I am wondering why do we need
> > > > 'tdp_mmu_allowed' at all?  The purpose of having 'allow_mmio_caching' is because
> > > > kvm_mmu_set_mmio_spte_mask() is called twice, and 'enable_mmio_caching' can be
> > > > disabled in the first call, so it can be against user's desire in the second
> > > > call.  However it appears for 'tdp_mmu_enabled' we don't need 'tdp_mmu_allowed',
> > > > as kvm_configure_mmu() is only called once by VMX or SVM, if I read correctly.
> > > 
> > > tdp_mmu_allowed is needed because kvm_intel and kvm_amd are separate
> > > modules from kvm. So kvm_configure_mmu() can be called multiple times
> > > (each time kvm_intel or kvm_amd is loaded).
> > > 
> > > 
> > 
> > Indeed. :)
> > 
> > Reviewed-by: Kai Huang <kai.huang@intel.com>
> 
> kvm_arch_init() which is called early during the module initialization before
> kvm_configure_mmu() via kvm_arch_hardware_setup() checks if the vendor module
> (kvm_intel or kvm_amd) was already loaded.  If yes, it results in -EEXIST.
> 
> So kvm_configure_mmu() won't be called twice.
> 

Hi Isaku,

Please consider module reload.
Isaku Yamahata Oct. 3, 2022, 9:56 p.m. UTC | #8
On Mon, Oct 03, 2022 at 01:02:42PM -0700,
David Matlack <dmatlack@google.com> wrote:

> On Mon, Oct 03, 2022 at 11:58:34AM -0700, Isaku Yamahata wrote:
> > On Tue, Sep 27, 2022 at 09:10:43PM +0000, "Huang, Kai" <kai.huang@intel.com> wrote:
> > > On Tue, 2022-09-27 at 09:14 -0700, David Matlack wrote:
> > > > On Tue, Sep 27, 2022 at 2:19 AM Huang, Kai <kai.huang@intel.com> wrote:
> > > > > 
> > > > > Sorry last time I didn't review deeply, but I am wondering why do we need
> > > > > 'tdp_mmu_allowed' at all?  The purpose of having 'allow_mmio_caching' is because
> > > > > kvm_mmu_set_mmio_spte_mask() is called twice, and 'enable_mmio_caching' can be
> > > > > disabled in the first call, so it can be against user's desire in the second
> > > > > call.  However it appears for 'tdp_mmu_enabled' we don't need 'tdp_mmu_allowed',
> > > > > as kvm_configure_mmu() is only called once by VMX or SVM, if I read correctly.
> > > > 
> > > > tdp_mmu_allowed is needed because kvm_intel and kvm_amd are separate
> > > > modules from kvm. So kvm_configure_mmu() can be called multiple times
> > > > (each time kvm_intel or kvm_amd is loaded).
> > > > 
> > > > 
> > > 
> > > Indeed. :)
> > > 
> > > Reviewed-by: Kai Huang <kai.huang@intel.com>
> > 
> > kvm_arch_init() which is called early during the module initialization before
> > kvm_configure_mmu() via kvm_arch_hardware_setup() checks if the vendor module
> > (kvm_intel or kvm_amd) was already loaded.  If yes, it results in -EEXIST.
> > 
> > So kvm_configure_mmu() won't be called twice.
> 
> kvm_configure_mmu() can be called multiple times if the vendor module is
> unloaded without unloading the kvm module. For example:
> 
>  $ modprobe kvm
>  $ modprobe kvm_intel ept=Y  # kvm_configure_mmu(true, ...)
>  $ modprobe -r kvm_intel
>  $ modprobe kvm_intel ept=N  # kvm_configure_mmu(false, ...)

Oh, yes, you're right.
Sean Christopherson Oct. 11, 2022, 8:12 p.m. UTC | #9
On Wed, Sep 21, 2022, David Matlack wrote:
> diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
> index 6bdaacb6faa0..168c46fd8dd1 100644
> --- a/arch/x86/kvm/mmu.h
> +++ b/arch/x86/kvm/mmu.h
> @@ -230,14 +230,14 @@ static inline bool kvm_shadow_root_allocated(struct kvm *kvm)
>  }
>  
>  #ifdef CONFIG_X86_64
> -static inline bool is_tdp_mmu_enabled(struct kvm *kvm) { return kvm->arch.tdp_mmu_enabled; }
> +extern bool tdp_mmu_enabled;
>  #else
> -static inline bool is_tdp_mmu_enabled(struct kvm *kvm) { return false; }
> +#define tdp_mmu_enabled false
>  #endif

Rather than open code references to the variable, keep the wrappers so that the
guts can be changed without needing to churn a pile of code.  I'll follow up in
the "Split out TDP MMU page fault handling" patch with the reasoning.

E.g.

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 6bdaacb6faa0..1ad6d02e103f 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -230,14 +230,21 @@ static inline bool kvm_shadow_root_allocated(struct kvm *kvm)
 }
 
 #ifdef CONFIG_X86_64
-static inline bool is_tdp_mmu_enabled(struct kvm *kvm) { return kvm->arch.tdp_mmu_enabled; }
+extern bool tdp_mmu_enabled;
+#endif
+
+static inline bool is_tdp_mmu_enabled(void)
+{
+#ifdef CONFIG_X86_64
+	return tdp_mmu_enabled;
 #else
-static inline bool is_tdp_mmu_enabled(struct kvm *kvm) { return false; }
+	return false;
 #endif
+}
 
 static inline bool kvm_memslots_have_rmaps(struct kvm *kvm)
 {
-	return !is_tdp_mmu_enabled(kvm) || kvm_shadow_root_allocated(kvm);
+	return !is_tdp_mmu_enabled() || kvm_shadow_root_allocated(kvm);
 }
 
 static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level)

Patch

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 2c96c43c313a..d76059270a43 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1262,15 +1262,6 @@  struct kvm_arch {
 	struct task_struct *nx_lpage_recovery_thread;
 
 #ifdef CONFIG_X86_64
-	/*
-	 * Whether the TDP MMU is enabled for this VM. This contains a
-	 * snapshot of the TDP MMU module parameter from when the VM was
-	 * created and remains unchanged for the life of the VM. If this is
-	 * true, TDP MMU handler functions will run for various MMU
-	 * operations.
-	 */
-	bool tdp_mmu_enabled;
-
 	/*
 	 * List of struct kvm_mmu_pages being used as roots.
 	 * All struct kvm_mmu_pages in the list should have
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 6bdaacb6faa0..168c46fd8dd1 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -230,14 +230,14 @@  static inline bool kvm_shadow_root_allocated(struct kvm *kvm)
 }
 
 #ifdef CONFIG_X86_64
-static inline bool is_tdp_mmu_enabled(struct kvm *kvm) { return kvm->arch.tdp_mmu_enabled; }
+extern bool tdp_mmu_enabled;
 #else
-static inline bool is_tdp_mmu_enabled(struct kvm *kvm) { return false; }
+#define tdp_mmu_enabled false
 #endif
 
 static inline bool kvm_memslots_have_rmaps(struct kvm *kvm)
 {
-	return !is_tdp_mmu_enabled(kvm) || kvm_shadow_root_allocated(kvm);
+	return !tdp_mmu_enabled || kvm_shadow_root_allocated(kvm);
 }
 
 static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e418ef3ecfcb..ccb0b18fd194 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -98,6 +98,13 @@  module_param_named(flush_on_reuse, force_flush_and_sync_on_reuse, bool, 0644);
  */
 bool tdp_enabled = false;
 
+bool __ro_after_init tdp_mmu_allowed;
+
+#ifdef CONFIG_X86_64
+bool __read_mostly tdp_mmu_enabled = true;
+module_param_named(tdp_mmu, tdp_mmu_enabled, bool, 0444);
+#endif
+
 static int max_huge_page_level __read_mostly;
 static int tdp_root_level __read_mostly;
 static int max_tdp_level __read_mostly;
@@ -1253,7 +1260,7 @@  static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
 {
 	struct kvm_rmap_head *rmap_head;
 
-	if (is_tdp_mmu_enabled(kvm))
+	if (tdp_mmu_enabled)
 		kvm_tdp_mmu_clear_dirty_pt_masked(kvm, slot,
 				slot->base_gfn + gfn_offset, mask, true);
 
@@ -1286,7 +1293,7 @@  static void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm,
 {
 	struct kvm_rmap_head *rmap_head;
 
-	if (is_tdp_mmu_enabled(kvm))
+	if (tdp_mmu_enabled)
 		kvm_tdp_mmu_clear_dirty_pt_masked(kvm, slot,
 				slot->base_gfn + gfn_offset, mask, false);
 
@@ -1369,7 +1376,7 @@  bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 		}
 	}
 
-	if (is_tdp_mmu_enabled(kvm))
+	if (tdp_mmu_enabled)
 		write_protected |=
 			kvm_tdp_mmu_write_protect_gfn(kvm, slot, gfn, min_level);
 
@@ -1532,7 +1539,7 @@  bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 	if (kvm_memslots_have_rmaps(kvm))
 		flush = kvm_handle_gfn_range(kvm, range, kvm_zap_rmap);
 
-	if (is_tdp_mmu_enabled(kvm))
+	if (tdp_mmu_enabled)
 		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush);
 
 	return flush;
@@ -1545,7 +1552,7 @@  bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	if (kvm_memslots_have_rmaps(kvm))
 		flush = kvm_handle_gfn_range(kvm, range, kvm_set_pte_rmap);
 
-	if (is_tdp_mmu_enabled(kvm))
+	if (tdp_mmu_enabled)
 		flush |= kvm_tdp_mmu_set_spte_gfn(kvm, range);
 
 	return flush;
@@ -1618,7 +1625,7 @@  bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	if (kvm_memslots_have_rmaps(kvm))
 		young = kvm_handle_gfn_range(kvm, range, kvm_age_rmap);
 
-	if (is_tdp_mmu_enabled(kvm))
+	if (tdp_mmu_enabled)
 		young |= kvm_tdp_mmu_age_gfn_range(kvm, range);
 
 	return young;
@@ -1631,7 +1638,7 @@  bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	if (kvm_memslots_have_rmaps(kvm))
 		young = kvm_handle_gfn_range(kvm, range, kvm_test_age_rmap);
 
-	if (is_tdp_mmu_enabled(kvm))
+	if (tdp_mmu_enabled)
 		young |= kvm_tdp_mmu_test_age_gfn(kvm, range);
 
 	return young;
@@ -3543,7 +3550,7 @@  static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
 	if (r < 0)
 		goto out_unlock;
 
-	if (is_tdp_mmu_enabled(vcpu->kvm)) {
+	if (tdp_mmu_enabled) {
 		root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu);
 		mmu->root.hpa = root;
 	} else if (shadow_root_level >= PT64_ROOT_4LEVEL) {
@@ -5662,6 +5669,9 @@  void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
 	tdp_root_level = tdp_forced_root_level;
 	max_tdp_level = tdp_max_root_level;
 
+#ifdef CONFIG_X86_64
+	tdp_mmu_enabled = tdp_mmu_allowed && tdp_enabled;
+#endif
 	/*
 	 * max_huge_page_level reflects KVM's MMU capabilities irrespective
 	 * of kernel support, e.g. KVM may be capable of using 1GB pages when
@@ -5909,7 +5919,7 @@  static void kvm_mmu_zap_all_fast(struct kvm *kvm)
 	 * write and in the same critical section as making the reload request,
 	 * e.g. before kvm_zap_obsolete_pages() could drop mmu_lock and yield.
 	 */
-	if (is_tdp_mmu_enabled(kvm))
+	if (tdp_mmu_enabled)
 		kvm_tdp_mmu_invalidate_all_roots(kvm);
 
 	/*
@@ -5934,7 +5944,7 @@  static void kvm_mmu_zap_all_fast(struct kvm *kvm)
 	 * Deferring the zap until the final reference to the root is put would
 	 * lead to use-after-free.
 	 */
-	if (is_tdp_mmu_enabled(kvm))
+	if (tdp_mmu_enabled)
 		kvm_tdp_mmu_zap_invalidated_roots(kvm);
 }
 
@@ -6046,7 +6056,7 @@  void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 
 	flush = kvm_rmap_zap_gfn_range(kvm, gfn_start, gfn_end);
 
-	if (is_tdp_mmu_enabled(kvm)) {
+	if (tdp_mmu_enabled) {
 		for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
 			flush = kvm_tdp_mmu_zap_leafs(kvm, i, gfn_start,
 						      gfn_end, true, flush);
@@ -6079,7 +6089,7 @@  void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
 		write_unlock(&kvm->mmu_lock);
 	}
 
-	if (is_tdp_mmu_enabled(kvm)) {
+	if (tdp_mmu_enabled) {
 		read_lock(&kvm->mmu_lock);
 		kvm_tdp_mmu_wrprot_slot(kvm, memslot, start_level);
 		read_unlock(&kvm->mmu_lock);
@@ -6322,7 +6332,7 @@  void kvm_mmu_try_split_huge_pages(struct kvm *kvm,
 				   u64 start, u64 end,
 				   int target_level)
 {
-	if (!is_tdp_mmu_enabled(kvm))
+	if (!tdp_mmu_enabled)
 		return;
 
 	if (kvm_memslots_have_rmaps(kvm))
@@ -6343,7 +6353,7 @@  void kvm_mmu_slot_try_split_huge_pages(struct kvm *kvm,
 	u64 start = memslot->base_gfn;
 	u64 end = start + memslot->npages;
 
-	if (!is_tdp_mmu_enabled(kvm))
+	if (!tdp_mmu_enabled)
 		return;
 
 	if (kvm_memslots_have_rmaps(kvm)) {
@@ -6426,7 +6436,7 @@  void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
 		write_unlock(&kvm->mmu_lock);
 	}
 
-	if (is_tdp_mmu_enabled(kvm)) {
+	if (tdp_mmu_enabled) {
 		read_lock(&kvm->mmu_lock);
 		kvm_tdp_mmu_zap_collapsible_sptes(kvm, slot);
 		read_unlock(&kvm->mmu_lock);
@@ -6461,7 +6471,7 @@  void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
 		write_unlock(&kvm->mmu_lock);
 	}
 
-	if (is_tdp_mmu_enabled(kvm)) {
+	if (tdp_mmu_enabled) {
 		read_lock(&kvm->mmu_lock);
 		kvm_tdp_mmu_clear_dirty_slot(kvm, memslot);
 		read_unlock(&kvm->mmu_lock);
@@ -6496,7 +6506,7 @@  void kvm_mmu_zap_all(struct kvm *kvm)
 
 	kvm_mmu_commit_zap_page(kvm, &invalid_list);
 
-	if (is_tdp_mmu_enabled(kvm))
+	if (tdp_mmu_enabled)
 		kvm_tdp_mmu_zap_all(kvm);
 
 	write_unlock(&kvm->mmu_lock);
@@ -6661,6 +6671,13 @@  void __init kvm_mmu_x86_module_init(void)
 	if (nx_huge_pages == -1)
 		__set_nx_huge_pages(get_nx_auto_mode());
 
+	/*
+	 * Snapshot userspace's desire to enable the TDP MMU. Whether or not the
+	 * TDP MMU is actually enabled is determined in kvm_configure_mmu()
+	 * when the vendor module is loaded.
+	 */
+	tdp_mmu_allowed = tdp_mmu_enabled;
+
 	kvm_mmu_spte_module_init();
 }
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index bf2ccf9debca..e7d0f21fbbe8 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -10,23 +10,18 @@ 
 #include <asm/cmpxchg.h>
 #include <trace/events/kvm.h>
 
-static bool __read_mostly tdp_mmu_enabled = true;
-module_param_named(tdp_mmu, tdp_mmu_enabled, bool, 0644);
-
 /* Initializes the TDP MMU for the VM, if enabled. */
 int kvm_mmu_init_tdp_mmu(struct kvm *kvm)
 {
 	struct workqueue_struct *wq;
 
-	if (!tdp_enabled || !READ_ONCE(tdp_mmu_enabled))
+	if (!tdp_mmu_enabled)
 		return 0;
 
 	wq = alloc_workqueue("kvm", WQ_UNBOUND|WQ_MEM_RECLAIM|WQ_CPU_INTENSIVE, 0);
 	if (!wq)
 		return -ENOMEM;
 
-	/* This should not be changed for the lifetime of the VM. */
-	kvm->arch.tdp_mmu_enabled = true;
 	INIT_LIST_HEAD(&kvm->arch.tdp_mmu_roots);
 	spin_lock_init(&kvm->arch.tdp_mmu_pages_lock);
 	INIT_LIST_HEAD(&kvm->arch.tdp_mmu_pages);
@@ -48,7 +43,7 @@  static __always_inline bool kvm_lockdep_assert_mmu_lock_held(struct kvm *kvm,
 
 void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
 {
-	if (!kvm->arch.tdp_mmu_enabled)
+	if (!tdp_mmu_enabled)
 		return;
 
 	/* Also waits for any queued work items.  */