[0/9] KVM: x86/mmu: Always enable the TDP MMU when TDP is enabled

Message ID 20220815230110.2266741-1-dmatlack@google.com (mailing list archive)

Message

David Matlack Aug. 15, 2022, 11:01 p.m. UTC
Patch 1 deletes the module parameter tdp_mmu and forces KVM to always
use the TDP MMU when TDP hardware support is enabled.  The rest of the
patches are related cleanups that follow (although the kvm_faultin_pfn()
cleanups at the end are only tangentially related at best).

The TDP MMU was introduced in 5.10 and has been enabled by default since
5.15. At this point there are no known functionality gaps between the
TDP MMU and the shadow MMU, and the TDP MMU uses less memory and scales
better with the number of vCPUs. In other words, there is no good reason
to disable the TDP MMU.

Dropping the ability to disable the TDP MMU reduces the number of
possible configurations that need to be tested to validate KVM (i.e. no
need to test with tdp_mmu=N), and simplifies the code.

David Matlack (9):
  KVM: x86/mmu: Always enable the TDP MMU when TDP is enabled
  KVM: x86/mmu: Drop kvm->arch.tdp_mmu_enabled
  KVM: x86/mmu: Consolidate mmu_seq calculations in kvm_faultin_pfn()
  KVM: x86/mmu: Rename __direct_map() to nonpaging_map()
  KVM: x86/mmu: Separate TDP and non-paging fault handling
  KVM: x86/mmu: Stop needlessly making MMU pages available for TDP MMU
    faults
  KVM: x86/mmu: Handle "error PFNs" in kvm_faultin_pfn()
  KVM: x86/mmu: Avoid memslot lookup during KVM_PFN_ERR_HWPOISON
    handling
  KVM: x86/mmu: Try to handle no-slot faults during kvm_faultin_pfn()

 .../admin-guide/kernel-parameters.txt         |   3 +-
 arch/x86/include/asm/kvm_host.h               |   9 -
 arch/x86/kvm/mmu.h                            |   8 +-
 arch/x86/kvm/mmu/mmu.c                        | 197 ++++++++++--------
 arch/x86/kvm/mmu/mmu_internal.h               |   1 +
 arch/x86/kvm/mmu/paging_tmpl.h                |  10 +-
 arch/x86/kvm/mmu/tdp_mmu.c                    |   9 +-
 7 files changed, 115 insertions(+), 122 deletions(-)


base-commit: 93472b79715378a2386598d6632c654a2223267b

Comments

David Matlack Aug. 15, 2022, 11:09 p.m. UTC | #1
On Mon, Aug 15, 2022 at 4:01 PM David Matlack <dmatlack@google.com> wrote:
>
> Try to handle faults on GFNs that do not have a backing memslot during
> kvm_faultin_pfn(), rather than relying on the caller to call
> handle_abnormal_pfn() right after kvm_faultin_pfn(). This reduces all of
> the page fault paths by eliminating duplicate code.
>
> Opportunistically tweak the comment about handling gfn > host.MAXPHYADDR
> to reflect that the effect of returning RET_PF_EMULATE at that point is
> to avoid creating an MMIO SPTE for such GFNs.
>
> No functional change intended.
>
> Signed-off-by: David Matlack <dmatlack@google.com>
> ---
>  arch/x86/kvm/mmu/mmu.c         | 55 +++++++++++++++++-----------------
>  arch/x86/kvm/mmu/paging_tmpl.h |  4 ---
>  2 files changed, 27 insertions(+), 32 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
[...]
> @@ -4181,6 +4185,9 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>         if (unlikely(is_error_pfn(fault->pfn)))
>                 return kvm_handle_error_pfn(fault);
>
> +       if (unlikely(!fault->slot))
> +               return kvm_handle_noslot_fault(vcpu, fault, ACC_ALL);

This is broken: in the shadow paging case this needs to be the guest's
pte_access, not ACC_ALL. I remember now that I had that in an earlier version,
but it got lost at some point while I was rebasing locally.
Peter Zijlstra Aug. 16, 2022, 8:16 a.m. UTC | #2
On Mon, Aug 15, 2022 at 04:01:01PM -0700, David Matlack wrote:
> Patch 1 deletes the module parameter tdp_mmu and forces KVM to always
> use the TDP MMU when TDP hardware support is enabled.  The rest of the
> patches are related cleanups that follow (although the kvm_faultin_pfn()
> cleanups at the end are only tangentially related at best).
> 
> The TDP MMU was introduced in 5.10 and has been enabled by default since
> 5.15. At this point there are no known functionality gaps between the
> TDP MMU and the shadow MMU, and the TDP MMU uses less memory and scales
> better with the number of vCPUs. In other words, there is no good reason
> to disable the TDP MMU.

Then how are you going to test the shadow mmu code -- which I assume is
still relevant for the platforms that don't have this hardware support
you speak of?
David Matlack Aug. 16, 2022, 4:30 p.m. UTC | #3
On Tue, Aug 16, 2022 at 1:17 AM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Mon, Aug 15, 2022 at 04:01:01PM -0700, David Matlack wrote:
> > [...]
>
> Then how are you going to test the shadow mmu code -- which I assume is
> still relevant for the platforms that don't have this hardware support
> you speak of?

TDP hardware support can still be disabled with module parameters
(kvm_intel.ept=N and kvm_amd.npt=N).

The tdp_mmu module parameter only controls whether KVM uses the TDP
MMU or shadow MMU *when TDP hardware is enabled*.
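Concretely, using the standard module parameters (a sketch of the usual workflow; reloading kvm_intel/kvm_amd requires that no VMs are running):

```shell
# Check whether TDP hardware support is enabled on an Intel host
cat /sys/module/kvm_intel/parameters/ept    # "Y" means EPT (TDP) is in use

# Reload with TDP hardware support disabled to exercise the shadow MMU
modprobe -r kvm_intel
modprobe kvm_intel ept=N                    # AMD equivalent: kvm_amd npt=N
```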
David Matlack Aug. 16, 2022, 10:54 p.m. UTC | #4
On Mon, Aug 15, 2022 at 4:01 PM David Matlack <dmatlack@google.com> wrote:
>
> Patch 1 deletes the module parameter tdp_mmu and forces KVM to always
> use the TDP MMU when TDP hardware support is enabled.  The rest of the
> patches are related cleanups that follow (although the kvm_faultin_pfn()
> cleanups at the end are only tangentially related at best).

Please feel free to ignore the kvm_faultin_pfn() cleanups, i.e.
patches 7-9. They conflict with Peter Xu's series [1], there is a bug
in patch 9, and they are only tangentially related.

[1] https://lore.kernel.org/kvm/20220721000318.93522-4-peterx@redhat.com/
Peter Zijlstra Aug. 17, 2022, 8:53 a.m. UTC | #5
On Tue, Aug 16, 2022 at 09:30:54AM -0700, David Matlack wrote:
> On Tue, Aug 16, 2022 at 1:17 AM Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > On Mon, Aug 15, 2022 at 04:01:01PM -0700, David Matlack wrote:
> > > [...]
> >
> > Then how are you going to test the shadow mmu code -- which I assume is
> > still relevant for the platforms that don't have this hardware support
> > you speak of?
> 
> TDP hardware support can still be disabled with module parameters
> (kvm_intel.ept=N and kvm_amd.npt=N).
> 
> The tdp_mmu module parameter only controls whether KVM uses the TDP
> MMU or shadow MMU *when TDP hardware is enabled*.

Ah, fair enough. Carry on.
Kai Huang Aug. 17, 2022, 10:01 a.m. UTC | #6
On Tue, 2022-08-16 at 09:30 -0700, David Matlack wrote:
> On Tue, Aug 16, 2022 at 1:17 AM Peter Zijlstra <peterz@infradead.org> wrote:
> > 
> > On Mon, Aug 15, 2022 at 04:01:01PM -0700, David Matlack wrote:
> > > [...]
> > 
> > Then how are you going to test the shadow mmu code -- which I assume is
> > still relevant for the platforms that don't have this hardware support
> > you speak of?
> 
> TDP hardware support can still be disabled with module parameters
> (kvm_intel.ept=N and kvm_amd.npt=N).
> 
> The tdp_mmu module parameter only controls whether KVM uses the TDP
> MMU or shadow MMU *when TDP hardware is enabled*.

With the tdp_mmu module parameter, when we develop new code we can at least
easily verify that the legacy MMU code still works *while TDP hardware is
enabled*, simply by turning the parameter off. And when there is a problem with
the TDP MMU code, we can easily switch back to the legacy MMU.

Do we want to lose those flexibilities?
David Matlack Aug. 17, 2022, 4:42 p.m. UTC | #7
On Wed, Aug 17, 2022 at 3:01 AM Huang, Kai <kai.huang@intel.com> wrote:
>
> On Tue, 2022-08-16 at 09:30 -0700, David Matlack wrote:
> > On Tue, Aug 16, 2022 at 1:17 AM Peter Zijlstra <peterz@infradead.org> wrote:
> > >
> > > On Mon, Aug 15, 2022 at 04:01:01PM -0700, David Matlack wrote:
> > > > [...]
> > >
> > > Then how are you going to test the shadow mmu code -- which I assume is
> > > still relevant for the platforms that don't have this hardware support
> > > you speak of?
> >
> > TDP hardware support can still be disabled with module parameters
> > (kvm_intel.ept=N and kvm_amd.npt=N).
> >
> > The tdp_mmu module parameter only controls whether KVM uses the TDP
> > MMU or shadow MMU *when TDP hardware is enabled*.
>
> With the tdp_mmu module parameter, when we develop new code we can at least
> easily verify that the legacy MMU code still works *while TDP hardware is
> enabled*, simply by turning the parameter off.

I am proposing that KVM stops supporting this use-case, so testing it
would no longer be necessary. However, based on Paolo's reply there
might be a snag with 32-bit systems.

> And when there is a problem with the TDP
> MMU code, we can easily switch back to the legacy MMU.

Who is "we" in this context? For cloud providers, switching a customer
VM from the TDP MMU back to the shadow MMU to fix an issue is not
feasible. The shadow MMU uses more memory and has worse performance
for larger VMs, i.e. switching comes with significant downsides that
are unlikely to be lower risk than simply rolling out a fix for the
TDP MMU.

Also, over time, the TDP MMU will be less and less likely to have bugs
than the shadow MMU for TDP-enabled use-cases. e.g. The TDP MMU has
been default enabled since 5.15, so it has likely received
significantly more test cycles than the shadow MMU in the past 5
releases.


>
> Do we want to lose those flexibilities?

Kai Huang Aug. 17, 2022, 11:36 p.m. UTC | #8
On Wed, 2022-08-17 at 09:42 -0700, David Matlack wrote:
> On Wed, Aug 17, 2022 at 3:01 AM Huang, Kai <kai.huang@intel.com> wrote:
> > 
> > On Tue, 2022-08-16 at 09:30 -0700, David Matlack wrote:
> > > On Tue, Aug 16, 2022 at 1:17 AM Peter Zijlstra <peterz@infradead.org> wrote:
> > > > 
> > > > On Mon, Aug 15, 2022 at 04:01:01PM -0700, David Matlack wrote:
> > > > > [...]
> > > > 
> > > > Then how are you going to test the shadow mmu code -- which I assume is
> > > > still relevant for the platforms that don't have this hardware support
> > > > you speak of?
> > > 
> > > TDP hardware support can still be disabled with module parameters
> > > (kvm_intel.ept=N and kvm_amd.npt=N).
> > > 
> > > The tdp_mmu module parameter only controls whether KVM uses the TDP
> > > MMU or shadow MMU *when TDP hardware is enabled*.
> > 
> > With the tdp_mmu module parameter, when we develop new code we can at least
> > easily verify that the legacy MMU code still works *while TDP hardware is
> > enabled*, simply by turning the parameter off.
> 
> I am proposing that KVM stops supporting this use-case, so testing it
> would no longer be necessary. However, based on Paolo's reply there
> might be a snag with 32-bit systems.
> 
> > And when there is a problem with the TDP
> > MMU code, we can easily switch back to the legacy MMU.
> 
> Who is "we" in this context? For cloud providers, switching a customer
> VM from the TDP MMU back to the shadow MMU to fix an issue is not
> feasible. The shadow MMU uses more memory and has worse performance
> for larger VMs, i.e. switching comes with significant downsides that
> are unlikely to be lower risk than simply rolling out a fix for the
> TDP MMU.
> 
> Also, over time, the TDP MMU will be less and less likely to have bugs
> than the shadow MMU for TDP-enabled use-cases. e.g. The TDP MMU has
> been default enabled since 5.15, so it has likely received
> significantly more test cycles than the shadow MMU in the past 5
> releases.

"We" means regular KVM patch developers. And yes, I agree that "switching back
to the legacy MMU" isn't a good argument from a cloud provider's point of view.