
[v2,0/3] KVM: x86/mmu: Drop dedicated self-changing mapping code

Message ID 20230202182817.407394-1-seanjc@google.com

Message

Sean Christopherson Feb. 2, 2023, 6:28 p.m. UTC
Excise the MMU's one-off self-changing mapping logic and instead detect
self-changing mappings in the primary "walk" flow, and rely on
kvm_mmu_hugepage_adjust() to naturally handle "disallowed hugepage due to
shadow page" conditions.

When is_self_change_mapping() was first added, KVM did hugepage adjustments
before the primary walk, and so didn't account for shadow pages that were
allocated for the current page fault, i.e. effectively consumed a stale
disallow_lpage.  Now that KVM adjusts after allocating new shadow pages, the
one-off code is superfluous.

Dropping the one-off code fixes an issue where KVM will force 4KiB pages
for a 1GiB guest page even when a 2MiB page would be safe (the 1GiB range
overlaps a shadow page but the 2MiB range does not).
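
The difference can be seen in a sketch of the level adjustment
(simplified; range_overlaps_shadowed_table() is a hypothetical stand-in
for the disallow_lpage bookkeeping that kvm_mmu_hugepage_adjust()
consults):

/*
 * Demote the mapping level one step at a time.  The dropped one-off
 * code instead forced PG_LEVEL_4K on any overlap, even when a 2MiB
 * mapping would have been safe.
 */
static int adjusted_level(gfn_t gfn, int req_level)
{
	int level;

	for (level = req_level; level > PG_LEVEL_4K; level--) {
		if (!range_overlaps_shadowed_table(gfn, level))
			break;
	}
	return level;
}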

v2:
 - Track the "write #PF to shadow page" using an EMULTYPE flag (sketched
   below).
 - Split the main patch in two.
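
Sketch of the EMULTYPE tracking (plumbing simplified, and the bit value
may differ across kernel versions; see the patches for the exact hooks):

#define EMULTYPE_WRITE_PF_TO_SP	(1 << 8)

/* Fault handler: convert the per-fault bool into an emulation flag. */
if (fault.write_fault_to_shadow_pgtable && emulation_type)
	*emulation_type |= EMULTYPE_WRITE_PF_TO_SP;

/* Emulator: unprotecting and retrying would simply re-fault. */
if (emulation_type & EMULTYPE_WRITE_PF_TO_SP)
	return false;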

v1: https://lore.kernel.org/all/20221213125538.81209-1-jiangshanlai@gmail.com

Lai Jiangshan (2):
  KVM: x86/mmu: Detect write #PF to shadow pages during FNAME(fetch)
    walk
  KVM: x86/mmu: Remove FNAME(is_self_change_mapping)

Sean Christopherson (1):
  KVM: x86/mmu: Use EMULTYPE flag to track write #PFs to shadow pages

 arch/x86/include/asm/kvm_host.h | 37 +++++++++++---------
 arch/x86/kvm/mmu/mmu.c          |  5 +--
 arch/x86/kvm/mmu/mmu_internal.h | 12 ++++++-
 arch/x86/kvm/mmu/paging_tmpl.h  | 61 ++++++---------------------------
 arch/x86/kvm/x86.c              | 15 ++------
 5 files changed, 46 insertions(+), 84 deletions(-)


base-commit: 11b36fe7d4500c8ef73677c087f302fd713101c2

Comments

Paolo Bonzini March 14, 2023, 1:34 p.m. UTC | #1
Queued, thanks.

Paolo