[00/16] KVM: x86: MMU page fault clean-up

Message ID 20191206235729.29263-1-sean.j.christopherson@intel.com (mailing list archive)
State New, archived

Commit Message

Sean Christopherson Dec. 6, 2019, 11:57 p.m. UTC
The original purpose of this series was to call thp_adjust() from
__direct_map() and FNAME(fetch) to eliminate a page refcounting quirk[*].
Before doing that, I wanted to clean up the large page handling so that
the map/fetch functions weren't being passed multiple booleans that tracked
the same basic info.  While trying to decipher all the interactions,
I stumbled across a handful of fun things:

  - 32-bit KVM w/ TDP is completely broken with respect to 64-bit GPAs due
    to the page fault handlers and all related flows dropping bits 63:32
    of the GPA.  As a result, KVM inserts the wrong GPA and the guest hangs
    because it generates EPT/NPT faults until it's killed.

  - The TDP and non-paging page fault flows are identical except for
    one-off constraints on guest page size.

  - The !VALID_PAGE(root_hpa) checks in the page fault flows are bogus.
    They were added a few years ago to "fix" a nVMX bug and are no longer
    needed now that nVMX is in much better shape.

Patch 1 fixes the 32-bit KVM w/ TDP issue.  More details below.
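
For illustration, the root cause is simple enough to reproduce in a
standalone sketch (not KVM code): gpa_t is always a u64, but the fault
address was being passed around as 'unsigned long', which is 32 bits on
a 32-bit kernel.  Here uint32_t stands in for the 32-bit 'unsigned
long', and cr2_or_gpa follows the cr2/gpa naming in the patch title:

  #include <stdint.h>
  #include <stdio.h>

  typedef uint64_t gpa_t;              /* KVM's gpa_t is always 64 bits */

  int main(void)
  {
          gpa_t gpa = 0x1fffff000ull;      /* 64-bit GPA with bit 32 set */
          uint32_t cr2 = gpa;              /* pre-fix: bits 63:32 dropped */
          gpa_t cr2_or_gpa = gpa;          /* post-fix: full GPA preserved */

          printf("truncated: %#x\n", (unsigned)cr2);      /* 0xfffff000 */
          printf("preserved: %#llx\n",
                 (unsigned long long)cr2_or_gpa);          /* 0x1fffff000 */
          return 0;
  }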

Patches 2-12 are 99% refactoring to merge TDP and non-paging page fault
handling, and to do the thp_adjust() move.  These are basically nops from
a functional perspective.  There are technically functional changes in a
few patches, but they are very superficial and in theory won't be
observable in normal usage.
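
As a rough standalone sketch of where the refactoring ends up (names
and page level constants are illustrative, not the series verbatim),
both fault paths funnel into one direct handler and differ only in the
maximum page level they allow:

  #include <stdint.h>
  #include <stdio.h>

  typedef uint64_t gpa_t;

  enum { PG_LEVEL_4K = 1, PG_LEVEL_2M = 2, PG_LEVEL_1G = 3 };

  /* Shared direct fault path: resolve the pfn, adjust the page level,
   * then map.  Callers differ only in the max level they pass in. */
  static int direct_page_fault(gpa_t gpa, uint32_t error_code, int max_level)
  {
          printf("fault at %#llx, err %#x, max level %d\n",
                 (unsigned long long)gpa, error_code, max_level);
          return 0;
  }

  /* The non-paging MMU builds a PAE pagetable, so 2M pages at most. */
  static int nonpaging_page_fault(gpa_t gpa, uint32_t error_code)
  {
          return direct_page_fault(gpa, error_code, PG_LEVEL_2M);
  }

  /* TDP can map any page size the host and memslot allow. */
  static int tdp_page_fault(gpa_t gpa, uint32_t error_code)
  {
          return direct_page_fault(gpa, error_code, PG_LEVEL_1G);
  }

  int main(void)
  {
          nonpaging_page_fault(0x1000, 0x2);
          tdp_page_fault(0x200000, 0x2);
          return 0;
  }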

Patches 13-16 add WARNs on the !VALID_PAGE(root_hpa) checks to make it
clear that root_hpa is expected to be valid when handling page faults,
e.g. for the longest time I thought KVM relied on the checks in map/fetch
to correctly handle kvm_mmu_zap_all().
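
A minimal sketch of the shape those patches move toward: validate
root_hpa once, loudly, at the top of the handler, instead of silently
bailing deep in map/fetch.  VALID_PAGE and WARN_ON are modeled with
stand-ins so this compiles outside the kernel, and the handler name is
illustrative:

  #include <stdint.h>
  #include <stdio.h>

  #define INVALID_PAGE  ((uint64_t)-1)
  #define VALID_PAGE(x) ((x) != INVALID_PAGE)
  /* stand-in for the kernel's WARN_ON: complain, then yield the condition */
  #define WARN_ON(cond) ((cond) ? (fprintf(stderr, "WARN: %s\n", #cond), 1) : 0)

  enum { RET_PF_RETRY, RET_PF_HANDLED };

  static int page_fault(uint64_t root_hpa, uint64_t gpa)
  {
          /* Faulting with an invalid root is a KVM bug; warn and retry. */
          if (WARN_ON(!VALID_PAGE(root_hpa)))
                  return RET_PF_RETRY;

          (void)gpa;      /* ... normal fault handling ... */
          return RET_PF_HANDLED;
  }

  int main(void)
  {
          page_fault(INVALID_PAGE, 0x1000);   /* triggers the WARN */
          return 0;
  }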


32-bit KVM w/ TDP:

I marked this patch for stable because it's obviously a bug fix, but I'm
not at all sure we want to backport the fix.  Obviously no userspace VMM
running on 32-bit KVM is actually exposing 64-bit GPAs to its guests,
i.e. odds are this won't actually fix any real world use cases.  And the
scope of the changes is likely to make backporting a pain.  But, on the
other hand, if it's not backported then future bug fixes in related code
are likely to conflict, and it does fix the case where a buggy guest
kernel accesses a non-existent 64-bit GPA (the guest now crashes instead
of hanging indefinitely).

I'm also not confident I found all the cases where KVM is truncating the
GPA.  AFAIK, 32-bit Qemu simply doesn't support 64-bit GPAs.  To confirm
the bug and verify the fix, I hacked KVM and the guest kernel to generate
64-bit GPAs when remapping MMIO, which covers a tiny fragment of KVM.
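
The hack itself is the diff in the Patch section at the bottom of this
page: the guest-side ioremap change ORs in the highest supported
physical address bit so every MMIO remap lands above 4GiB, and the
host-side emulator change masks the GPA reported to userspace down to
its low 32 bits.  As a standalone illustration of the address math
(assuming 36 physical address bits purely for the demo):

  #include <stdint.h>
  #include <stdio.h>

  #define BIT_ULL(n) (1ULL << (n))

  int main(void)
  {
          unsigned int x86_phys_bits = 36;     /* assumed for the demo */
          uint64_t phys_addr = 0xfed00000ull;  /* typical 32-bit MMIO address */
          uint64_t remapped = phys_addr | BIT_ULL(x86_phys_bits - 1);

          printf("original:  %#llx\n", (unsigned long long)phys_addr);
          printf("remapped:  %#llx\n", (unsigned long long)remapped);
          /* prints 0x8fed00000: bits 63:32 are now exercised */
          return 0;
  }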


[*] https://lkml.kernel.org/r/20191126174603.GB22233@linux.intel.com

Sean Christopherson (16):
  KVM: x86: Use gpa_t for cr2/gpa to fix TDP support on 32-bit KVM
  KVM: x86/mmu: Move definition of make_mmu_pages_available() up
  KVM: x86/mmu: Fold nonpaging_map() into nonpaging_page_fault()
  KVM: x86/mmu: Move nonpaging_page_fault() below try_async_pf()
  KVM: x86/mmu: Refactor handling of cache consistency with TDP
  KVM: x86/mmu: Refactor the per-slot level calculation in
    mapping_level()
  KVM: x86/mmu: Refactor handling of forced 4k pages in page faults
  KVM: x86/mmu: Incorporate guest's page level into max level for shadow
    MMU
  KVM: x86/mmu: Persist gfn_lpage_is_disallowed() to max_level
  KVM: x86/mmu: Rename lpage_disallowed to account_disallowed_nx_lpage
  KVM: x86/mmu: Consolidate tdp_page_fault() and nonpaging_page_fault()
  KVM: x86/mmu: Move transparent_hugepage_adjust() above __direct_map()
  KVM: x86/mmu: Move calls to thp_adjust() down a level
  KVM: x86/mmu: Move root_hpa validity checks to top of page fault
    handler
  KVM: x86/mmu: WARN on an invalid root_hpa
  KVM: x86/mmu: WARN if root_hpa is invalid when handling a page fault

 arch/x86/include/asm/kvm_host.h |   8 +-
 arch/x86/kvm/mmu/mmu.c          | 438 ++++++++++++++------------------
 arch/x86/kvm/mmu/paging_tmpl.h  |  58 +++--
 arch/x86/kvm/mmutrace.h         |  12 +-
 arch/x86/kvm/x86.c              |  40 ++-
 arch/x86/kvm/x86.h              |   2 +-
 include/linux/kvm_host.h        |   6 +-
 virt/kvm/async_pf.c             |  10 +-
 8 files changed, 259 insertions(+), 315 deletions(-)

Comments

Paolo Bonzini Dec. 9, 2019, 3:31 p.m. UTC | #1
On 07/12/19 00:57, Sean Christopherson wrote:
> The original purpose of this series was to call thp_adjust() from
> __direct_map() and FNAME(fetch) to eliminate a page refcounting quirk[*].
> Before doing that, I wanted to clean up the large page handling so that
> the map/fetch functions weren't being passed multiple booleans that tracked
> the same basic info.  While trying to decipher all the interactions,
> I stumbled across a handful of fun things:
> 
>   - 32-bit KVM w/ TDP is completely broken with respect to 64-bit GPAs due
>     to the page fault handlers and all related flows dropping bits 63:32
>     of the GPA.  As a result, KVM inserts the wrong GPA and the guest hangs
>     because it generates EPT/NPT faults until it's killed.
> 
>   - The TDP and non-paging page fault flows are identical except for
>     one-off constraints on guest page size.
> 
>   - The !VALID_PAGE(root_hpa) checks in the page fault flows are bogus.
>     They were added a few years ago to "fix" a nVMX bug and are no longer
>     needed now that nVMX is in much better shape.
> 
> Patch 1 fixes the 32-bit KVM w/ TDP issue.  More details below.
> 
> Patches 2-12 are 99% refactoring to merge TDP and non-paging page fault
> handling, and to do the thp_adjust() move.  These are basically nops from
> a functional perspective.  There are technically functional changes in a
> few patches, but they are very superficial and in theory won't be
> observable in normal usage.
> 
> Patches 13-16 add WARNs on the !VALID_PAGE(root_hpa) checks to make it
> clear that root_hpa is expected to be valid when handling page faults,
> e.g. for the longest time I thought KVM relied on the checks in map/fetch
> to correctly handle kvm_mmu_zap_all().
> 
> 
> 32-bit KVM w/ TDP:
> 
> I marked this patch for stable because it's obviously a bug fix, but I'm
> not at all sure we want to backport the fix.  Obviously no userspace VMM
> running on 32-bit KVM is actually exposing 64-bit GPAs to its guests,
> i.e. odds are this won't actually fix any real world use cases.  And the
> scope of the changes is likely to make backporting a pain.  But, on the
> other hand, if it's not backported then future bug fixes in related code
> are likely to conflict, and it does fix the case where a buggy guest
> kernel accesses a non-existent 64-bit GPA (the guest now crashes instead
> of hanging indefinitely).
> 
> I'm also not confident I found all the cases where KVM is truncating the
> GPA.  AFAIK, 32-bit Qemu simply doesn't support 64-bit GPAs.  To confirm
> the bug and verify the fix, I hacked KVM and the guest kernel to generate
> 64-bit GPAs when remapping MMIO, which covers a tiny fragment of KVM.

Queued, thanks!

Paolo

> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index fa46fbed60013..49a59bcb32117 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -5737,7 +5737,7 @@ static int emulator_read_write(struct x86_emulate_ctxt *ctxt,
>         vcpu->run->mmio.len = min(8u, vcpu->mmio_fragments[0].len);
>         vcpu->run->mmio.is_write = vcpu->mmio_is_write = ops->write;
>         vcpu->run->exit_reason = KVM_EXIT_MMIO;
> -       vcpu->run->mmio.phys_addr = gpa;
> +       vcpu->run->mmio.phys_addr = gpa & 0xffffffffull;
>  
>         return ops->read_write_exit_mmio(vcpu, gpa, val, bytes);
>  }
> 
> 
> diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
> index b9c78f3bcd673..e22f254987bea 100644
> --- a/arch/x86/mm/ioremap.c
> +++ b/arch/x86/mm/ioremap.c
> @@ -184,7 +184,10 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr,
>         if (kernel_map_sync_memtype(phys_addr, size, pcm))
>                 goto err_free_area;
>  
> -       if (ioremap_page_range(vaddr, vaddr + size, phys_addr, prot))
> +       BUG_ON(!boot_cpu_data.x86_phys_bits);
> +
> +       if (ioremap_page_range(vaddr, vaddr + size,
> +                              phys_addr | BIT_ULL(boot_cpu_data.x86_phys_bits - 1), prot))
>                 goto err_free_area;
>  
>         ret_addr = (void __iomem *) (vaddr + offset);
> 
> [*] https://lkml.kernel.org/r/20191126174603.GB22233@linux.intel.com
> 
> Sean Christopherson (16):
>   KVM: x86: Use gpa_t for cr2/gpa to fix TDP support on 32-bit KVM
>   KVM: x86/mmu: Move definition of make_mmu_pages_available() up
>   KVM: x86/mmu: Fold nonpaging_map() into nonpaging_page_fault()
>   KVM: x86/mmu: Move nonpaging_page_fault() below try_async_pf()
>   KVM: x86/mmu: Refactor handling of cache consistency with TDP
>   KVM: x86/mmu: Refactor the per-slot level calculation in
>     mapping_level()
>   KVM: x86/mmu: Refactor handling of forced 4k pages in page faults
>   KVM: x86/mmu: Incorporate guest's page level into max level for shadow
>     MMU
>   KVM: x86/mmu: Persist gfn_lpage_is_disallowed() to max_level
>   KVM: x86/mmu: Rename lpage_disallowed to account_disallowed_nx_lpage
>   KVM: x86/mmu: Consolidate tdp_page_fault() and nonpaging_page_fault()
>   KVM: x86/mmu: Move transparent_hugepage_adjust() above __direct_map()
>   KVM: x86/mmu: Move calls to thp_adjust() down a level
>   KVM: x86/mmu: Move root_hpa validity checks to top of page fault
>     handler
>   KVM: x86/mmu: WARN on an invalid root_hpa
>   KVM: x86/mmu: WARN if root_hpa is invalid when handling a page fault
> 
>  arch/x86/include/asm/kvm_host.h |   8 +-
>  arch/x86/kvm/mmu/mmu.c          | 438 ++++++++++++++------------------
>  arch/x86/kvm/mmu/paging_tmpl.h  |  58 +++--
>  arch/x86/kvm/mmutrace.h         |  12 +-
>  arch/x86/kvm/x86.c              |  40 ++-
>  arch/x86/kvm/x86.h              |   2 +-
>  include/linux/kvm_host.h        |   6 +-
>  virt/kvm/async_pf.c             |  10 +-
>  8 files changed, 259 insertions(+), 315 deletions(-)
>

Patch

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fa46fbed60013..49a59bcb32117 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5737,7 +5737,7 @@  static int emulator_read_write(struct x86_emulate_ctxt *ctxt,
        vcpu->run->mmio.len = min(8u, vcpu->mmio_fragments[0].len);
        vcpu->run->mmio.is_write = vcpu->mmio_is_write = ops->write;
        vcpu->run->exit_reason = KVM_EXIT_MMIO;
-       vcpu->run->mmio.phys_addr = gpa;
+       vcpu->run->mmio.phys_addr = gpa & 0xffffffffull;
 
        return ops->read_write_exit_mmio(vcpu, gpa, val, bytes);
 }


diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index b9c78f3bcd673..e22f254987bea 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -184,7 +184,10 @@  static void __iomem *__ioremap_caller(resource_size_t phys_addr,
        if (kernel_map_sync_memtype(phys_addr, size, pcm))
                goto err_free_area;
 
-       if (ioremap_page_range(vaddr, vaddr + size, phys_addr, prot))
+       BUG_ON(!boot_cpu_data.x86_phys_bits);
+
+       if (ioremap_page_range(vaddr, vaddr + size,
+                              phys_addr | BIT_ULL(boot_cpu_data.x86_phys_bits - 1), prot))
                goto err_free_area;
 
        ret_addr = (void __iomem *) (vaddr + offset);